
Monday, February 11, 2008

How Much Bandwidth is Enough?

Nobody yet knows how much Internet access bandwidth a typical user will need in the future, at peak times (average usage doesn't much matter). It is easier in many ways to model bandwidth requirements for entertainment video services. 

If a provider uses a "broadband" approach (in the sense of all linear channels being delivered to the user, whether or not the user is watching), it is a simple matter of ascertaining how many discrete video feeds one wishes to deliver, how much bandwidth each feed requires, and then doing some simple multiplication.
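That multiplication is easy to sketch; the channel counts and per-feed bitrates below are illustrative assumptions, not figures from any particular operator.

```python
# Rough total-bandwidth estimate for a "deliver every channel" lineup.
# Channel counts and per-feed bitrates are illustrative assumptions.
SD_MBPS = 3.75   # assumed standard-definition feed bitrate
HD_MBPS = 15.0   # assumed high-definition feed bitrate

def lineup_bandwidth_mbps(sd_channels: int, hd_channels: int) -> float:
    """Total downstream bandwidth if every feed is always delivered."""
    return sd_channels * SD_MBPS + hd_channels * HD_MBPS

# e.g. a hypothetical 200-channel SD lineup plus 20 HD channels:
total = lineup_bandwidth_mbps(200, 20)
print(f"{total:.0f} Mbps")  # 1050 Mbps
```

A switched (on-demand) design replaces the full-lineup total with an estimate of peak simultaneous feeds per home, which is why the planning assumptions in the next paragraph matter so much.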

If one instead decides to deliver all programming on demand, one needs a switching infrastructure, and must make some assumptions about simultaneous peak viewing. Will a typical user, at the peak viewing hour, want to watch one feed, two feeds or three, keeping in mind that one feed might be recorded for later viewing while a second is actively watched?

The Internet access portion of the planning exercise is more murky, but still hinges on video behavior. If business logic allows it, users might be able to stream video, even HDTV video, over IP connections. Whether this bandwidth is of the "public Internet" type or the "walled garden" type is less important, in some sense. 

 Assuming there is a revenue model, how much bandwidth must a service provider be able to provide? Whatever end users may think, a service provider will deliver bandwidth in amounts that allow it to make money, and no more. 

 So asking how much bandwidth users may want is probably less important than how much they are willing to pay to get that level of bandwidth. 

And so far, few users seem to have shown a willingness to spend hundreds of dollars to get symmetrical bandwidth, whether that is a T1 connection or a 50 Mbps symmetrical service from SureWest Communications. To use the old but useful analogy, all of us might enjoy driving a Lexus. But not all of us do. We solve our transportation problems, but not always with a Lexus. In principle bandwidth ultimately will represent that sort of choice as well. 

Just about anybody can buy a T1 connection today. But not all businesses do so, and few consumers do. Granted, the bulk of consumer bandwidth requirements still will remain asymmetrical (barring a massive switch to peer-to-peer), so symmetrical bandwidth might not be the best analogy. Still, the question remains: how much bandwidth will consumers pay for? "Need" is in that sense a subsidiary question.

There's no question typical consumers are showing a clear preference for paying more for higher bandwidth. The issue is the elasticity of that demand as service providers start to move into the "scores of megabits" range, and then contemplate bandwidths an order of magnitude higher than that (100 Mbps or more). 

 If one looks simply at the price-per-megabit, users have shown a wide willingness to pay $50 to $100 a month for unrestricted use of 200 Mbps to 500 Mbps of linear video (with implicit quality of service assurances). 

 They likewise have shown high willingness to pay $50 a month for a few megabits to several megabits per second of interactive Internet access bandwidth in the downstream direction, with no quality of service assurances. 

 Assume that most also have been willing to pay $50 a month or so for a wireline voice connection and you are looking at $150 to $200 worth of monthly revenue for services offering several hundred megabits-per-second of downstream bandwidth, plus services on top, using a highly asymmetrical network. 
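The implied price per megabit per second can be computed directly from figures like those above; the 5 Mbps Internet access tier used below is an illustrative assumption.

```python
# Implied monthly price per Mbps for the service bundles described above.
def price_per_mbps(monthly_price: float, mbps: float) -> float:
    return monthly_price / mbps

# Linear video: $50 to $100 a month for 200 Mbps to 500 Mbps of feeds.
video_low = price_per_mbps(50, 500)    # $0.10 per Mbps
video_high = price_per_mbps(100, 200)  # $0.50 per Mbps

# Internet access: $50 a month for an assumed 5 Mbps downstream tier.
internet = price_per_mbps(50, 5)       # $10.00 per Mbps

print(video_low, video_high, internet)
```

On a raw per-megabit basis, in other words, consumers pay far more for best-effort Internet access than for quality-assured linear video, which frames the headroom question in the next paragraph.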

That does not leave lots of headroom for networks that deliver more symmetrical bandwidth (scores of megabits per second in the upstream and hundreds of megabits per second for linear and on-demand video plus 100 Mbps for interactive applications). 

In the consumer markets, the rule of thumb has been that $10 a month of incremental spending is a big deal. Still, given a compelling enough value proposition, even $50 a month in incremental spending now has become fairly commonplace.

So the issue might be more "how much will consumers pay?" rather than "how much bandwidth will they need?", as important as that question remains. 

There always are trade-offs engineers can make: bandwidth versus processing, processing versus storage, non-real-time versus real time, bandwidth versus image quality and so forth. Ultimately, consumers are going to drive access bandwidth with their wallets.

Friday, May 1, 2009

100 Mbps is Really Nice: How Many Really Need It?

Aside from the general observation that marketing bragging rights are a key reason for touting really-fast broadband connections, one wonders how much real value the typical consumer customer gains.

Cablevision Systems, for example, is on the verge of launching a 101 Mbps (downstream) service costing $99.95 a month. Other service providers have been marketing 50-Mbps services (downstream).

But one wonders how much traction such services will get in the consumer space. To be sure, a 101-Mbps access connection better matches common in-home or on-premises bandwidth supported by Wi-Fi routers.

But it remains unclear how much incremental value the additional bandwidth provides, as many factors affect perceived performance. It won't help to have a really-fast access connection if the servers holding the content one wants to access are not capable of spewing out bits equally fast, if the backbone networks are congested or if there are end user device limitations.

A single user on such a connection (50 Mbps to 100 Mbps) might not have an experience any different from a user with a 10 Mbps or 20 Mbps connection.

To be sure, one can note that bandwidth requirements keep growing. The change from text to graphics generated a one-to-two order of magnitude increase in bandwidth requirements, for example. A text screen is typically 400 bytes, while a graphic screen can be 50 Kbytes to 100 Kbytes, an increase of roughly 125 to 250 times.
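The ratio implied by those byte counts can be checked directly:

```python
# Bandwidth jump from text screens to graphic screens, per the figures above.
TEXT_BYTES = 400              # typical text screen
GRAPHIC_BYTES_LOW = 50_000    # 50 Kbytes
GRAPHIC_BYTES_HIGH = 100_000  # 100 Kbytes

low_ratio = GRAPHIC_BYTES_LOW / TEXT_BYTES    # 125x
high_ratio = GRAPHIC_BYTES_HIGH / TEXT_BYTES  # 250x
print(low_ratio, high_ratio)  # 125.0 250.0
```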

Similar changes can be noted for streaming audio or video. So bandwidth requirements will increase over time. The issue is how much, and for whom.

Locations with many users to support--businesses and large families--will have higher requirements.  In some cases, a family might benefit from having that much bandwidth if multiple users, working simultaneously, frequently are downloading or streaming high-quality video, for example.

Still, one fact is incontestable: the larger the degree of sharing, the more efficient the multiplexing becomes, since any single link is used more fully as more users share it. So the increase in required capacity will not be linear with the number of users.

A typical single user, on a single link, might not require so much. By the same token, it isn't clear how much bandwidth a single user, on a single link, actually benefits from really-fast connections, beyond a certain point, as other variables also condition and limit the experience.

That isn't to say access bandwidth requirements are not growing, or to argue those requirements will stop growing. Having the opportunity to buy faster connections is valuable for some end users, particularly those with multiple users in a single household. It is much less clear how much additional utility is gained by the typical single user.

Tuesday, June 22, 2010

How Much Speed is Enough?

Lots of people think 100-Mbps or 1-Gbps access services are the wave of the future. One facetiously wonders whether they might always be. Nearly everybody likely agrees that bandwidth requirements are growing, and that "more" bandwidth is a good thing. The problem is that it is hard to answer the question of "how much is enough?"

StarHub, for example, already offers a commercial 100-Mbps service, and sells the "MaxOnline Ultimate" service for $62.40 a month, in Singapore.

Only five percent of customers have bought it, says Neil Montefiore, StarHub CEO. "I'm unconvinced about consumer demand for 100 Mbps."

U.S. access providers who already sell 50 Mbps or 100 Mbps connections seem to have had the same results. When it is available, relatively few customers choose to buy services running at such speeds.

"No one is quite sure what people will do with 100-Mbps symmetrical," he said. "Do people really need that speed?" The other issue is whether raw bandwidth of very-high capacity is sufficient, rather than merely necessary, to ensure creation of compelling and useful applications and services. In other words, higher bandwidth may be a prerequisite for valuable new apps, but it is not clear that, where 50 Mbps or 100 Mbps access is available, much of anything noteworthy has developed beyond what could be done at 10 Mbps or 20 Mbps, for example.

The other question is how much demand there is for very-high-speed services, even when prices are reasonable. If customers can buy 100 Mbps for about $63 (U.S. currency), but they can buy 50 Mbps for $50, is the issue the extra bandwidth, or a value-price assessment which leads people to conclude that high bandwidth, but not super-high bandwidth, is a better deal, and sufficient to accommodate their needs?

Consumers can buy 16-Mbps service for about $37 a month, as well, or cheaper 3 Mbps or 6 Mbps services.


German cable network operator Kabel BW claims that around 40,000 customers are using broadband with speeds of 50 Mbps or 100 Mbps. About three million homes are able to buy service at those rates. So buyers represent about one percent of customers.

Also, the price for the 50-Mbps access service is about $41 a month. What is not clear is what percentage of those buyers actually are businesses, rather than consumers.

It is a laudable thing to call for 100 Mbps service, available to most U.S. users, by 2020. What is missing at this point is evidence of robust-enough demand for speeds of 50 Mbps, at $100 a month.

Kabel BW has found only about one percent take rates, at prices of $41 a month. Obviously, no investor in his or her right mind would loan money to a service provider to offer 50 Mbps service at the same prices as users presently pay.


A new survey by Leichtman Research Group finds that 71 percent of U.S. broadband Internet subscribers are very satisfied with their current Internet service at home (rating satisfaction 8-10 on a 10-point scale), while just three percent are not satisfied (rating satisfaction 1-3).

To be fair, with broadband, appetite changes over time. But the issue is how to match actual demand, at market prices, to the amount of bandwidth that should be delivered.

While 77 percent of broadband subscribers do not know the download speed of their Internet service at home, they are generally pleased with the speed of their Internet connection. Overall, 66 percent of broadband subscribers rate the speed of their connection 8 to 10 and six percent rate it 1 to 3.

The findings are based on a telephone survey of 1,600 randomly selected households from throughout the United States. The survey also found that more than 70 percent of respondents said they subscribed to a broadband service.

Some 26 percent of broadband subscribers are very interested in receiving faster Internet access at home than they currently receive (rating interest 8-10 on a 10-point scale), while 44 percent are not very interested (rating interest 1-3).

Of all Internet subscribers, three percent of respondents say that broadband is not available in their area. In rural areas eight percent of online households say that broadband is not available in their area.

Overall, 1.4 percent of all households are interested in getting broadband, but say that it is not available in their area. Less than one percent of all households are interested in getting broadband, but cite cost as a reason for not currently subscribing to broadband.

Nobody can tell "how much bandwidth is enough." For the moment, though, the evidence here seems to suggest that there is not huge pent-up demand for dramatically-faster speeds. So far, the evidence from markets such as Singapore, and from U.S. areas where either 50 Mbps or 100 Mbps is available for purchase, does not support the thesis that dramatically-higher speed is a huge need at the moment, except possibly at prices far lower than they presently are.

Everyone expects demand for bandwidth to keep expanding. What seems less clear is the pace of that growth.

Saturday, January 19, 2013

Is Usage-Based Internet Access Inherently Unfair?

Though understandable, given the “no incremental cost” nature of much Internet content, information and applications, one might argue the way many think about the Internet is out of sync with the way they think about most other products they buy and use. 

Most of the criticism about usage-based pricing is that it somehow is "unfair." Much of the criticism takes the form of complaints about ISPs somehow taking advantage of consumers. It is argued there is no need for metering, for example.

In other cases, some critics imply or allege that metered pricing is simply a way for ISPs to make more money from their customers.

Are usage-based charging mechanisms inherently unfair and detrimental to continued development of the Internet? Some think so. And there is Internet precedent for such thinking, to be sure. AOL found usage exploded when it, and other dial-up access providers, shifted from metered usage to flat fee pricing.

One might object that this encouraged use of the Internet but at the “expense” of increased direct costs for Internet access providers. So there is good reason to argue that directly metered use of Internet access might actually discourage people from using the Internet.

But that isn’t generally the way usage is rated, these days. Consumers generally understand and seem comfortable with “buckets of usage” that provide cost predictability, but also allow users to buy less or more access in line with their needs.

Usage based pricing might actually be a good thing for the overwhelming number of consumers, to the extent that lighter users pay less, heavier users pay more, and suppliers have accurate information about how much more capacity to add, where and when, which in turn ensures that investment is adequate to support anticipated growth of demand.

In fact, one might argue, the worst scenario is one where usage and pricing are not related in some relatively direct way, as that distorts both demand and supply.

One frequently hears warnings about outsized growth of broadband access demand, the implication being that a crisis might develop if “something is not done.” Some predict that 1,000 times more mobile bandwidth will be needed by 2020, for example.

But both suppliers and consumers are rational about their bandwidth choices, when there is a clear link between consumption and out of pocket costs, and when consumers can act on that information.

Even if future supply were not an issue, it would still make sense to allow consumers to make choices about how much "Internet access" they really want to purchase, as that would send clear signals to suppliers about how much to invest in new capacity.

The problem with “unlimited” plans is that such retail pricing does not automatically send accurate supply and demand signals, and does not trigger the normal decision-making consumers always make when considering how much of any product to buy.

Nor do we often remember that demand for Internet access is dynamic, not static. Raise the price, and consumers will buy less, lower the price and they will buy more.

To an extent, changes in device profiles also make a difference, as typical bandwidth consumption on a PC is far higher than on a smart phone or a tablet.

And users clearly are shifting Internet activities to smart phones and tablets. At some point, that could slow data consumption growth rates, even if, over time, bandwidth consumption grows.

Demand will grow, but probably less robustly than many forecasts predict. Mobile data consumption, even among smart phone users, is well below 1 Gbyte a month, according to Sandvine.


An analysis by the U.S. Federal Communications Commission suggested that, in the first half of 2009, the median fixed network (half used more, half used less) broadband user consumed almost two gigabytes of data per month. Mobile users consumed only hundreds of megabytes.

The 2009 study suggested that, overall, per-person usage is growing 30 percent to 35 percent per year. That doesn't necessarily directly suggest how much an "account" or "home" might consume, though.

The FCC study does not directly correlate a single person’s usage with the account details, as it is a “per-capita” measure. Such “per-person” measures are useful, but not entirely accurate if services are purchased “by location,” instead of “by person.”

In other words, a single user might have one access account, while a family might have three to five people sharing a single account.

As a rough metric, a typical 2.5-person household, sharing one account, might have consumed about six gigabytes a month, based on the 2009 data.

If the 30 percent annual growth rate remains intact, that might imply 2014 median usage of about seven gigabytes per person, or 17.5 Gbytes per household account, using the 2.5 persons per home assumption.

Other 2010 estimates for current consumption were roughly in the same range as the 2009 FCC figures, adjusted for annual growth.  Comcast said in December 2010 that a typical user consumed about two to four gigabytes a month, far below the 250 gigabyte cap for a Comcast residential account.

That would be right in line with the FCC’s base of two gigabytes, and a growth rate of 30 percent annually.
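The compounding behind those figures is simple to sketch (note the post rounds the compounded result of roughly 7.4 GB down to "about seven" before multiplying by 2.5 persons):

```python
# Compound growth of monthly per-person data consumption,
# starting from the FCC's ~2 GB median in mid-2009.
def projected_usage_gb(base_gb: float, annual_growth: float, years: float) -> float:
    """Project monthly usage forward at a constant compound growth rate."""
    return base_gb * (1 + annual_growth) ** years

per_person_2014 = projected_usage_gb(2.0, 0.30, 5)  # mid-2009 -> mid-2014
household_2014 = per_person_2014 * 2.5              # 2.5 persons per account
print(round(per_person_2014, 1), round(household_2014, 1))  # 7.4 18.6
```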

Actual data consumption for most users of fixed network broadband is not all that high, in other words. True, demand will grow. But so long as price signals can be sent, supply should satisfy demand.




Sunday, May 27, 2018

Spectrum Supply and Demand are About to Go "Unstable"

It always is possible to get a robust debate about "whether enough spectrum is available." It might soon be possible to get a robust debate on whether spectrum prices will drop, based on increases in supply.

On one hand, demand keeps growing, so even if orders of magnitude of new supply are added, supply and demand should remain in equilibrium, where prices are stable and supply matches demand, as economists like to say.

On the other hand, many of you might look at your own experience in the communications business and not agree that the business is in equilibrium, and that applies to the value and price of acquired spectrum, as well as expectations about value and price as markets evolve. 

For example, though it is hard to place a financial value on mobile operator, business or consumer end user access to Wi-Fi, the ability to offload huge amounts of mobile phone internet access demand to Wi-Fi has a clear value to network operators who do not have to invest as much in capacity as they otherwise would have to do. 

In the U.S. mobile market, various participants also see multiple ways to increase the effective amount of capacity they can use. Buying new spectrum licenses is but one way to do so. In other cases a shift to small cells is seen as a reasonable alternative to acquiring new spectrum. That is basically Verizon's stated position.

In other cases, firms can buy companies that own licenses, as both Verizon and AT&T have done, and others might do. Dish Network has a trove of spectrum which must be put to commercial use or the licenses are lost. So many believe Dish ultimately will sell the licenses to a firm that can do so, fast. 

Mobile service provider mergers or acquisitions are another way to acquire additional spectrum.

But new techniques are coming, including the ability to aggregate unlicensed spectrum with licensed spectrum; access to shared spectrum that might cost less, or be accessible in unlicensed mode; plus huge increases in the amount of licensed and unlicensed spectrum available for mobile and other uses. 

So a good argument can be made that spectrum equilibrium is less likely. 

Recent spectrum auctions have diverged from expected sales values. In the past, mobile operators also have paid too much for spectrum.  

Recent U.S. spectrum auctions show mobile service providers being much more cautious about what they are willing to spend on buying spectrum licenses. The same trend was evident in recent spectrum auctions in India as well.  

In part, that is likely due to a perception that there are other ways of sourcing additional capacity, from aggregating unlicensed spectrum to use of smaller cells to shared spectrum or acquiring assets already awarded, but not yet in use. In some markets, spectrum trading also is a solution.  

But it also is possible that the perceived value of spectrum--still high--also has to match with expectations about the amount of revenue incremental spectrum can generate. If operators believe 100 new units will not drive the same amount of revenue as in the past, then their willingness to invest in spectrum will be less, on a per-unit basis.

Also, coming physical supply is disruptive, to say the least. All presently-licensed mobile spectrum, plus all Wi-Fi spectrum, plus new shared spectrum, amounts to about 2,600 MHz in the U.S. market. The actual mobile and Wi-Fi spectrum is closer to 800 MHz to 1,000 MHz.

But the Federal Communications Commission is releasing an order of magnitude more physical spectrum, much of it unlicensed. That, combined with possibly two orders of magnitude of virtual capacity increases, plus spectrum sharing, small cells and better radios, is bound to be disruptive.

Supply and demand is at work, in other words. And if supply increases faster than demand, spectrum prices should fall.

So how much will 5G change service provider spectrum valuation and asset models? Quite a lot. In fact, consultants at Deloitte say "5G changes everything." That might be a bit of hyperbole, but the point is that there is greater uncertainty, for several reasons.

For starters, it is an underappreciated fact that the value of spectrum licenses is part of the equity value of any public mobile service provider company.

Spectrum licenses account for "an average 35 percent of the assets of US WSPs (wireless service providers), and close to 20 percent of WSPs elsewhere," according to consultants at Deloitte.

But present valuations are carried at original purchase value, and therefore might not reflect actual market value in an era of growing spectrum demand and supply. At one level, the potential mismatch is easy to illustrate.

Assets for which an operator overpaid are carried at higher value than similar assets for which an operator paid less, even if the assets acquired at lower cost might be equally, or more, valuable. So accounting "fiction" is at work.

Still, historically, rights to use mobile spectrum have been fundamental drivers of the ability to be in the business and earn revenue. But there are new questions in the 5G and coming eras, as the supply of spectrum (physical and virtual) is changing by orders of magnitude.

And how does one account for the value of being able to offload traffic to Wi-Fi? That avoided capital investment is worth something, but how much? And even if valuable, can it be reflected in an assessment of equity value?

Scarcity also matters. Historically, mobile spectrum has had value in two or more ways. It has been the necessary precondition for conducting business and satisfying demand. But it also has been a means of denying competitors access.

Licensed spectrum has been a driver of scarcity, and therefore equity value.

Deloitte argues the value of spectrum is presently undervalued. On the other hand, one might argue that so much new spectrum is coming, and the ways to use unlicensed spectrum also multiplying, that old rules of thumb about value and pricing do not work so predictably.

Cable operators, for example, clearly see lots of value in using their distributed public Wi-Fi nodes as infrastructure for their new mobile services. The “Wi-Fi first” access model does reduce either capex or wholesale capacity purchases or both.

And though the correlation is not linear, since mobile operators can increase capacity in other ways, the amount of spectrum a mobile operator can deploy is linked to the amount of revenue it earns. But each contestant has other assets to deploy (capital, brand, scale), so the relationship is neither strictly linear nor causal.

In each market, some operators earn more revenue than others, for reasons including, but not limited to, the amount of spectrum they can deploy.

The point is that it is not clear whether spectrum presently is undervalued or not. The harder question is how to value such assets in the future, when the amount of supply--ignoring quality issues--is going to increase by an order of magnitude, and the effective capacity is going to increase by possibly two orders of magnitude.

Qualitative changes also will matter. Most internet of things apps will not require much bandwidth. And much bandwidth presently consumed across the backbone might in the future be cached and processed at the edge of the network. That will shift the bandwidth demand curve in significant ways.

On the other hand, if mobile networks are to challenge fixed networks as platforms for consumer internet access, then lots of cheap new bandwidth will be necessary, so mobile alternatives can offer comparable bandwidth and prices. Lower bandwidth costs are coming, in the mobile area, driven by platform improvements, more and more-efficient spectrum assets, use of small cells and shared, unlicensed and aggregated spectrum options.  

If mobile bandwidth traditionally has been an order of magnitude more expensive than fixed network bandwidth, then it is obvious that, to compete, mobile bandwidth has to be as capacious and affordable as fixed network bandwidth.


Up to this point, mobile cost per gigabyte has been as much as an order of magnitude more costly than fixed network cost per gigabyte. That is going to change.

Tuesday, December 28, 2021

Why Some Users Find 5G Unsatisfying

5G value is an issue for some users who have bought it, especially in some markets where low-band spectrum has been the way 5G is mostly experienced. But there arguably are reasons why user experience could be challenged even in markets where mid-band spectrum underpins 5G experience.


One reason is the difference between what users do--and what the networks must support--on fixed and mobile networks. Fixed networks are multi-use networks. So the obvious value in a fixed network setting is "speed" or "bandwidth" to support multiple simultaneous users.


That is not the case on mobile networks, where accounts are set up on a one device, one user basis. Even when there are multiple users on a single account, those users do not "share" a local access connection. So the advantage of "speed" is different on a mobile network.


There is no "sharing" of a single connection. Also, fixed networks support screens of many sizes. Mobile networks mostly support very-small screen devices. That shapes bandwidth demand.


Apps typically used on large screen or medium-screen devices further shape bandwidth demand. Entertainment devices such as 4K TVs will consume more bandwidth than standard-definition or high-definition viewing on very-small screens.


Mobile-connected devices supporting augmented reality are the exception, but remain relatively rare at the moment. And even many of those use cases rely on a local Wi-Fi connection, not the mobile network.


Up to a point, bandwidth affects user experience. Just as surely, additional bandwidth does not improve experience, once a threshold is reached. Latency and jitter also matter, but users might not be able to discern such changes, or wrongly attribute the lack of perceived improvement to "bandwidth" issues.


But if 4G provides any evidence, 5G value is going to change over the lifespan of the network. 


The initial value will be "speed," even if user experience changes less than some expect; the perceived value may be largely the marketing value of 5G delivering data faster, irrespective of experienced value.


The value after a decade will be "new use cases" and apps, for consumers and businesses. But that will take time. And consumers might well find there is "not much difference" between 4G use cases and new 5G apps, since the latter have not been created yet.


The betting early on is that many--perhaps most--of the new use cases will come from enterprise, not consumer uses. 


After a decade or so, we are likely to have discovered new consumer apps as well. It just is hard to say what those mass deployed use cases will be. Perhaps nobody predicted the emergence of ride sharing as an important 4G use case. 


Few predicted turn-by-turn navigation would be important. And though streaming video and audio were foreseen, even those apps do not rely so much on “speed” as the creation of easy-to-use and popular streaming apps.


In fact, the rise of “mobile-first” apps does not depend, strictly speaking, on bandwidth improvements brought by 4G, though faster speeds are an enabler. 


That would not be unusual for a next-generation mobile network, up to a point. If nothing else, coverage is an issue, early on. Even a better network does not help if it is not “generally available.”


Complicating matters is the rollout of 5G during the Covid pandemic and many restrictions on “out of home” and “on the go” usage. Working or learning remotely, many users likely spend most of their time connected to home Wi-Fi. So even if 5G is faster, the amount of time any single user might use it is far more limited than under normal circumstances. 


Still, faster speeds should help, up to a point, with existing applications, as page loading on a 600-Mbps fixed network connection should provide some noticeable advantages compared to a 300-Mbps connection (especially in multi-user and simultaneous multi-device usage cases).


Since 3G, the key user experience gain has been “faster mobile data access.” Sometimes that is tangible; but sometimes not so much.


An argument can be made that latency has even greater user experience impact on a mobile network. Beyond some relatively low point, additional speed might not improve user experience. We can debate what that threshold is, as it changes over time. 


If a consumer’s primary reason for buying 4G was a tethering experience closer to fixed network experience, the 4G advantage was immediately tangible. If the primary advantage sought was mobile web browsing experience similar to fixed network experience, then the advantage might well have been tangible. 


5G poses a bit of a tougher problem. When downstream 4G speeds are routinely in the 20 Mbps to 35 Mbps range, how much does experience change when 5G offers 165 Mbps? It should help, but how much?


It depends on what a user does on a phone. Web page loading will be faster, but how much faster? Ignore for the moment the authoring of a web page (optimized for mobile access or not; how well optimized). 


For fixed network access, faster access speeds have not necessarily meant that web pages are loading faster, for example. 


On mobile networks, connection speeds have improved, but mobile page load times have increased, according to the Nielsen Norman Group.


source: Nielsen Norman Group


Of course, page and landing page loading times are not a direct function of access speed but perhaps largely an artifact of remote server performance. So access speed is not the only, or perhaps not even primary determinant of user experience. 


The build-out phase of a national next-generation network takes years, so coverage outside of urban cores will typically be an issue. In some markets, where low-band and millimeter wave frequencies have been the mainstay, users might not often find there is much mobile data performance difference.


Wednesday, January 14, 2015

When Does Revenue Per Megabyte Matter?

As a rule, mobile and fixed network Internet service providers must care about revenue per megabyte and cost to supply megabytes, since revenue growth now often is driven by Internet services, and Internet services dominate overall network bandwidth issues.


But mobile now is a multi-product business, and each key type of app has a distinct revenue-per-megabyte profile.


The price of various telecom services varies by as much as four orders of magnitude per megabyte: text messaging has the highest profit margin, voice a high to moderate margin, video entertainment a moderate margin and Internet access a widely varying margin.


Of course, that is product profit margin, not “profit per megabyte.”


Does that matter? Not in some cases. Text messaging and voice consume so little bandwidth that profit per megabyte is not an issue, though other concerns--such as revenue per unit or unit volume--clearly are.


Buckets of mobile data usage likely are not particularly troublesome. So long as the service provider understands its costs, prices reasonably in relation to costs and consumers continue to believe the price-value relationship is reasonable, profit margin should not be a particular issue.


The issue is what happens as consumption continues to grow rapidly and consumption is related in some direct way to cost. And then there is Moore’s Law, and its analogies in the bandwidth business.


Long Term Evolution fourth-generation mobile networks are desired for any number of reasons, but among them is network efficiency: LTE often is said to be at least 30 percent more efficient at supplying megabytes, in addition to providing lower latency.


Some might argue LTE is much more spectrum efficient than that, perhaps by as much as an order of magnitude. Others say LTE is only minimally more efficient.


It might yet be reasonable to argue that more mobile capacity will be gained by use of multiple techniques, though, including new spectrum allocations, spectrum sharing, small cell architectures, modulation and air interface changes, offload, retail pricing and packaging and possibly, in some cases, device or app performance improvements.


One study has shown that some mobile apps consume significantly more data--seven to 21 times more--than the same content accessed using a browser. It is conceivable more efficiency could be wrung out of app performance.


Still, the demand side changes will be key. If consumers increasingly rely on their Internet connections to consume video, the amount of data consumed will skyrocket, growing a minimum of two to three orders of magnitude in perhaps a decade. If consumption is a product purchased “by the pound,” that will pose a key challenge.


Consumers are unlikely to spend two to three orders of magnitude more money on their Internet access services.


Unlimited pricing of Internet access is where the clear trouble lies, since the service provider easily could find consumers consuming vastly more data than is matched by revenue, putting huge pressure on profit margins.


The revenue per bit problem is easy to describe in another way, in the fixed network domain.


Assume a fixed network ISP sells a triple-play package for $100 a month, where each component--voice, Internet access and entertainment video--is priced equally (an implied price of about $33 for each component).


How much bandwidth is required to earn those $33 revenue components? Almost too little to measure in the case of voice; gigabytes for Internet content consumption and possibly scores of gigabytes for video.


So, by some estimates, where voice might earn 35 cents per megabyte, Internet apps might generate a few cents per megabyte. Recall that actual revenue per megabyte is statistical: it hinges on how much a user consumes after paying a flat fee for the right to use bandwidth.
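The arithmetic can be sketched quickly. The $100 bundle and the equal $33 splits come from the example above; the monthly usage figures below are illustrative assumptions (not measured data), chosen to land near the per-megabyte estimates the post cites.

```python
# Implied revenue per megabyte for each triple-play component.
# Usage figures are assumptions for illustration only.

MB_PER_GB = 1000

components = {
    # component: (monthly revenue in dollars, assumed monthly usage in MB)
    "voice": (33.0, 100),               # ~1,000 minutes of compressed voice (assumed)
    "internet": (33.0, 2 * MB_PER_GB),  # a few gigabytes of web use (assumed)
    "video": (33.0, 60 * MB_PER_GB),    # scores of gigabytes of streaming (assumed)
}

for name, (revenue, usage_mb) in components.items():
    cents_per_mb = revenue / usage_mb * 100
    print(f"{name:8s} ${revenue:.0f} over {usage_mb:>6} MB -> {cents_per_mb:7.3f} cents/MB")
```

Under those assumptions, voice earns tens of cents per megabyte, Internet access a cent or two, and video a small fraction of a cent--a spread of roughly three orders of magnitude for the same $33 of revenue.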


There are potential analogies in the mobile segment as well.


McKinsey analysts have argued in the past that a 3G network costs about one U.S. cent per megabyte. The problem, in many developing markets, is that revenue could drop to as little as 0.2 cents to 0.4 cents per megabyte, for any mobile Internet usage.


That implies a strategic need to reduce mobile Internet costs by an order of magnitude, to as little as 0.1 cent per megabyte. Tellabs similarly has warned about revenue per megabyte dipping below cost per megabyte, leading to an "end of profit" for the mobile business.
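That squeeze can be shown with the figures cited above: roughly 1.0 cent per megabyte of 3G network cost against 0.2 to 0.4 cents per megabyte of revenue. The function below is plain arithmetic, not a cost model.

```python
# Gross margin per megabyte under the cost and revenue figures cited above.

def margin_per_mb(revenue_cents: float, cost_cents: float) -> float:
    """Gross margin per megabyte, in cents; negative means selling below cost."""
    return revenue_cents - cost_cents

COST_3G = 1.0  # cents per MB (the McKinsey estimate cited above)

for revenue in (0.4, 0.2):
    m = margin_per_mb(revenue, COST_3G)
    print(f"revenue {revenue:.1f} c/MB vs cost {COST_3G:.1f} c/MB -> margin {m:+.1f} c/MB")

# The strategic response described in the text: cut cost an order of
# magnitude, to 0.1 cent per megabyte, restoring a positive margin.
TARGET_COST = COST_3G / 10
print(f"at {TARGET_COST:.1f} c/MB cost, 0.2 c/MB revenue -> margin "
      f"{margin_per_mb(0.2, TARGET_COST):+.1f} c/MB")
```

At developing-market revenue levels, every megabyte sold at 3G cost loses money; only the order-of-magnitude cost reduction turns the margin positive.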


But some apps arguably require very low prices per megabyte to be viable products, entertainment video being the best example. In such cases, low revenue per megabyte, or low profit margin per megabyte, is a precondition for offering or supporting the product.


So does gross revenue per megabyte matter? Yes, but less than gross revenue per account, device or line. It is doubtful anybody really cares about voice revenue per megabyte. Revenue per device, yes; revenue per account, yes; revenue per user, yes.


Is profit per megabyte important? Yes, especially for retail plans that feature unlimited usage.


Service providers that have moved to some metered form of usage, where consumption and price are somewhat related, might not have to worry about profit margin per megabyte.  


When revenue per megabyte is very high, application bandwidth is very low, customer demand poses few, if any, peak load issues and marginal cost is negligible, revenue per megabyte is not much of an issue.


When does gross revenue per megabyte matter quite a lot? When revenue per megabyte is low, costs of supplying capacity are high, there are serious peak load issues, marginal cost is somewhat high and unlimited usage is the charging method.


Also, there are instances where low profit margin actually is the desired outcome. Where the alternative is losing an account, low profit margin might be the preferred problem.


In markets where people are using voice and text messaging less than they used to, the telecom industry’s biggest problem is declining demand--not just profit margin. In such cases, lower revenue per service (especially when incremental bandwidth and other costs are quite low) is better than losing an account, since the incremental revenue arguably is more valuable than the actual profit margin.


Also, it can be very hard to determine what profit margins actually will be, in advance.


In many markets, such as the United States, mobile service now comes with truly unlimited domestic text messaging and voice. Actual profit margin depends on how much people use those services. No matter how low the retail price, if a customer uses very little of the resource (sends and receives few text messages, places and receives few calls), actual price per message, or price per call, can be quite high.


The same is true for many other services, including high speed access. Actual profit is statistical. If a consumer pays $20 a month, and talks 50 minutes, the price per minute is 40 cents. At 300 minutes, the price per minute is about seven cents.
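The "statistical" nature of flat-rate pricing is easy to make concrete. The $20 fee and the 50- and 300-minute usage levels are the ones given above; the effective per-unit price depends entirely on how much the customer actually uses.

```python
# Effective per-minute price under a flat monthly fee.

def effective_price_per_minute(flat_fee_dollars: float, minutes_used: int) -> float:
    """Effective price per minute, in cents, under flat-rate pricing."""
    return flat_fee_dollars * 100 / minutes_used

for minutes in (50, 300):
    cents = effective_price_per_minute(20, minutes)
    print(f"{minutes} minutes on a $20 plan -> {cents:.1f} cents per minute")
```

The same division applies to any flat-rate service: messages sent, gigabytes consumed or calls placed all set the effective unit price after the fact.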


And even if some do use the services at higher rates, the volume does not stress the network, and marginal costs are quite low.


To be sure, there are no telecom products other than content services that show an upward-sloping revenue trend.


Aggregate volume is growing but price per unit has been dropping, for virtually all communication services and products.

There is a key observation, though. So long as telecom services are bought “by the pound,” profit margin should be a controllable issue. So revenue per megabyte always matters, at a high level.

At a more granular level, sometimes low margins are a precondition for doing business, though.
