A 2010 study by Ofcom, the U.K. communications regulator, found fixed network speeds were about four times faster than mobile speeds. The difference in page loading times was even more dramatic: fixed network web pages loaded 17 times faster than mobile pages.
But simple logic also suggests that measures of broadband speed are becoming quite a bit more nuanced than in the past, as the “typical” form of broadband access becomes a “mobile” connection, not a fixed line connection.
That does not mean the absolute volume of data consumption is related in a linear way to the number of subscribers, only that “typical access speed” is harder to describe than once was the case. Some 84 percent of smart phone users say they use their smart phones to access the Internet, for example.
By the end of 2011, total global mobile subscriptions had reached nearly six billion, corresponding to a global penetration of 86 percent, according to the International Telecommunications Union.
Growth was driven by developing countries, which accounted for more than 80 percent of the 660 million new mobile subscriptions added in 2011. That is significant. To begin with, mobile connections typically run slower than fixed connections, and developing market connections tend to run slower than connections in developed markets.
That might explain why, in the third quarter of 2012, the global average connection speed declined 6.8 percent to 2.8 Mbps, and the global average peak connection speed declined 1.4 percent to 15.9 Mbps, says Akamai.
According to Ericsson, mobile data use has grown exponentially since about 2008.
Wednesday, January 23, 2013
Mobile Broadband Now Shapes Global "Speed" Metrics
Gary Kim has been a digital infra analyst and journalist for more than 30 years, covering the business impact of technology, pre- and post-internet. He sees a similar evolution coming with AI. General-purpose technologies do not come along very often, but when they do, they change life, economies and industries.
Google Fiber "Is Not a Hobby"
Patrick Pichette, Google CFO, said on Google’s recent earnings call that Google Fiber is not a hobby. That could mean lots of things, so ISPs should not necessarily make assumptions about what that statement means.
“We really think that we should be making good business with this opportunity and we are going to continue to look at the possibility of expanding, but right now we just got to nail because we are in the early days,” said Pichette.
“Not a hobby” could mean that Google does not intend to lose money on the venture, and is not simply spending money on a “hero” initiative that is not intended to directly sustain itself over the long term.
Contrast that with Apple’s statements some years ago that Apple TV was, in fact, a hobby, implying that commercial impact was not expected.
But “not a hobby” would unsettle other large ISPs much more if it implied Google was seriously entertaining the notion of becoming an ISP in its own right, on a bigger scale.
Those sorts of fears have been expressed in the past, about Google “becoming a telco.” But Google has become a handset supplier, on a limited scale. Google Voice does earn communications revenue. Google Docs does compete with Microsoft’s “Office” suite.
Google does operate a large global backbone network. Likewise, there are, from time to time, discussions of whether Google (or other big application providers) want to become mobile service providers.
And even at the recent Pacific Telecommunications Council meeting, at least a few attendees I spoke with did express concern that Google might in fact be considering a wider and more significant entry into either the local access or backbone transport markets.
In other words, there remains considerable unease about what Google might decide to do, in the communications business.
The concern might be overblown. But there is no doubt about what Google would prefer, and that is higher speeds for most end users and more investment in access networks by the leading ISPs to enable that.
Google’s challenge to leading ISPs is clear enough.
In a highly-competitive market, the low-cost provider tends, over time, to win. That is true with respect to large tier one telcos competing with large tier one cable operators, for example. You might argue that cable gains in high-speed access and fixed network market share provide a clear example.
Some now would argue that ISPs--both fixed network and mobile ISPs--need to match Google’s own costs, on a cost-per-gigabyte basis, whether measured in cents or dollars per gigabyte. How well that can be done, and whether it can be done at all, is the question.
But Google has affected service provider thinking before. Remember several years ago when executives started to routinely say they had to “innovate at Google speed?” Doubtless, most would say no telco really is able to innovate that fast. But it might be argued that service providers do now innovate faster than before.
So it might not be unreasonable to argue that if Google continues to demonstrate new cost models for very high speed access, that service providers will respond.
Shifting to costs equivalent to Google’s costs might be a daunting prospect, but less daunting than what could happen if legacy revenue streams erode faster than new revenue replacements can be created.
It is one thing to argue that telcos, for example, need to incrementally reduce current operating costs. But that argument also hinges on a crucial assumption, namely that current revenue continues to grow on a relatively stable basis, while revenue losses from legacy products do not accelerate in a destabilizing way.
Some might argue that the risk of unexpected revenue trend deterioration is greater than most now assume. In that case, one way or the other, service providers will have to make further adjustments. That is one reason why Google hints that it might expand the Google Fiber program.
Mobile Now Shapes "Average" Internet Access Speeds
What will dramatically-higher mobile broadband and mobile data plan adoption mean for global “average” Internet access speeds? The question already is starting to matter.
By the end of 2011, total global mobile subscriptions had reached nearly six billion, corresponding to a global penetration of 86 percent, according to the International Telecommunications Union.
Growth was driven by developing countries, which accounted for more than 80 percent of the 660 million new mobile subscriptions added in 2011.
If one assumes a typical mobile connection supports lower speed than a fixed network broadband connection, rapidly growing mobile Internet access will have a huge impact on “average” access speeds.
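That effect is simple weighted-average arithmetic. The sketch below uses assumed round-number speeds (8 Mbps fixed, 2 Mbps mobile, a 4:1 ratio roughly in line with the Ofcom finding cited elsewhere on this blog), not measured values:

```python
# Hypothetical illustration: how a growing share of slower mobile
# connections pulls down the blended "average" access speed.

def blended_average(fixed_speed_mbps, mobile_speed_mbps, mobile_share):
    """Subscriber-weighted average speed for a mix of fixed and mobile lines."""
    return (1 - mobile_share) * fixed_speed_mbps + mobile_share * mobile_speed_mbps

# Assumed speeds: fixed lines at 8 Mbps, mobile at 2 Mbps.
for share in (0.25, 0.50, 0.75):
    avg = blended_average(8, 2, share)
    print(f"mobile share {share:.0%}: average {avg:.1f} Mbps")
```

Even with no change to either network, the blended average falls from 6.5 Mbps to 3.5 Mbps as the mobile share of connections rises from a quarter to three quarters.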
By the end of 2011, there were more than one billion mobile broadband subscriptions worldwide. More important is the rate of change: mobile broadband grew at a 40 percent annual rate in 2011. That rate will slow over time, of course, but at such rates the base of users doubles in less than three years.
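The “doubles in less than three years” claim follows directly from compounding, as a quick check shows:

```python
import math

def doubling_time_years(annual_growth_rate):
    """Years for a base to double at a constant annual growth rate."""
    return math.log(2) / math.log(1 + annual_growth_rate)

# At 40 percent annual growth, the subscriber base doubles in about 2.1 years.
print(round(doubling_time_years(0.40), 2))  # ≈ 2.06
```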
Also, compare mobile broadband to fixed network broadband subscriptions. At the end of 2011, there were 590 million fixed broadband subscriptions worldwide. In other words, there were nearly twice as many mobile broadband users as fixed network broadband users by the end of 2011.
Furthermore, fixed network broadband growth in developed countries was slowing (a five percent increase in 2011), while developing countries continued to experience high growth (18 percent in 2011).
As you might guess, fixed network broadband penetration remains low in some regions, such as Africa and the Arab states, with 0.2 percent and two percent adoption, respectively, by the end of 2011.
Also, in 2011, 30 million fixed broadband subscriptions were added in China alone, representing about half of the total fixed network subscriptions added worldwide, while fixed broadband penetration reached 12 percent in China.
One should therefore assume that comparing future “average” or “typical” broadband speeds to past data will be misleading. We might already be seeing that sort of impact.
In the third quarter of 2012, the global average connection speed declined 6.8 percent to 2.8 Mbps, and the global average peak connection speed declined 1.4 percent to 15.9 Mbps, says Akamai.
That statistic likely directly reflects the growing use of mobile networks. Since access from mobile devices far outstrips access from fixed network connections, globally, and since mobile network top speeds are generally lower than fixed network speeds, a growing volume of mobile connections will affect overall “average speed.”
In 2010, global mobile penetration was nearing 80 percent. Early in 2012, global mobile penetration reached 85 percent.
All of that means “average” statistics about broadband access speeds will have to be considered in a more nuanced way from this point forward. As “most” Internet access happens from mobile devices, the “average” connection speed, either peak or average, is going to reflect the “slower” mobile speeds, compared to fixed network connections.
Tuesday, January 22, 2013
200 Million Global LTE Subscribers in 2013, One Billion by 2016
Just three years after the technology’s original deployment, 4G Long Term Evolution (LTE) networks were used by more than 100 million subscribers in 2012, and global subscriptions will reach 200 million by the end of 2013, according to IHS iSuppli.
That represents a compound annual growth rate of about 139 percent.
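For readers who want to sanity-check such figures, compound annual growth rate is easy to compute. The function below is a generic sketch; note that 100 million to 200 million in a single year is 100 percent growth, so the 139 percent CAGR presumably spans a longer window from a smaller baseline that the summary above does not state:

```python
def cagr(start, end, years):
    """Compound annual growth rate between two values over a period."""
    return (end / start) ** (1.0 / years) - 1.0

# One-year growth implied by the 2012-to-2013 subscriber figures.
print(f"{cagr(100e6, 200e6, 1):.0%}")  # 100%
```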
What are the Right Metrics to Measure Service Provider Success?
“The financial community does not measure our industry correctly,” says Norman Fekrat, former VP and partner at IBM Global Business Services. And there will be consequences once analysts finally figure out that the current metrics, from the legacy voice business, do not accurately describe the actual financial results being generated by telcos globally.
The number of subscribers once was a meaningful metric. Because “subscribers” was useful, so was the concept of “churn,” reflecting a service provider’s ability to keep its customers.
These days, “revenue generating units” are reported by many service providers, because that simply makes more sense. Average revenue per user likewise made perfect sense in a long era where “subscribers” and “lines” were accurate and useful ways to measure business health.
Fekrat argues that the current metrics actually do not capture financial performance in ways that will matter as all services wind up as IP-bandwidth-based apps. In the era to come, where the fundamental network resource consumed by any app is “gigabytes,” profit will have to be measured, per service or application, in relationship to use of the network.
In a voice-centric business model, additional usage did not really affect “cost,” in terms of use of the network resource. In a bandwidth-based model, by contrast, the profit of a video entertainment service would have to be evaluated not only in terms of revenue, but also in terms of consumption of network resources.
The same would hold for voice, messaging, web surfing or any other application using the network. Part of the reason for Fekrat’s concern is that, just to keep profit margins where they currently are, assuming growing consumption of bandwidth, cost per gigabyte has to decline about 70 percent to 90 percent every three to four years.
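The arithmetic behind that claim is worth making explicit. If traffic compounds at rate g per year while revenue stays flat, cost per gigabyte must fall by 1 − 1/(1+g)^n over n years just to hold total network cost, and thus margin, constant. The growth rates below are assumptions for illustration, not Fekrat’s figures:

```python
def required_cost_decline(traffic_growth, years):
    """Fraction by which cost per GB must fall so total network cost
    stays flat while traffic compounds at `traffic_growth` per year."""
    return 1.0 - 1.0 / (1.0 + traffic_growth) ** years

# Assumed traffic growth rates, for illustration only.
for g, n in ((0.50, 3), (0.60, 4)):
    decline = required_cost_decline(g, n)
    print(f"{g:.0%} annual traffic growth over {n} years -> "
          f"cost/GB must fall {decline:.0%}")
```

Under these assumptions, 50 percent annual traffic growth over three years requires roughly a 70 percent decline in cost per gigabyte, and 60 percent growth over four years requires roughly 85 percent, consistent with the 70-to-90-percent range cited above.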
Some of that cost reduction might already be happening, at least for buyers able to buy in some volume. Fekrat assumes wholesale capacity prices of about $4 to $5 per gigabyte. Some buyers or sellers might argue prices, on some routes, already are in the two cents to three cents per gigabyte range.
In other words, if the cost per gigabyte per service argument is valid, at least for the capacity part of the business, costs might already be falling fast enough to make operating, capital and other overhead costs more significant than network costs, at least where the core networks are concerned. Access networks might be a different matter, since traditional cost analysis might attribute as much as 90 percent of end-to-end cost to the access networks on either side of any session.
And Fekrat has one benchmark in mind: service provider network costs must, over time, match those of Google. That’s a very tall order, but wise advice, if you assume that, in a competitive market, over the long term, the lowest cost network wins.
"Smart M2M" and "Smart ARPU"
It is by no means clear what a “smart” pipe strategy really is, compared to a “dumb pipe” or capacity play, in the retail telecommunications business. To be sure, it is obvious why communications executives find the term “dumb pipe” distasteful, as it implies “low value” or “low margin” or “low gross revenue.”
In truth, virtually all “smart pipe” strategies are built largely on a “dumb pipe” (best effort Internet access). In that sense, all retail strategies now are a mix of “dumb pipe” access and applications (“smart pipe”). Any service provider selling video entertainment services or voice, for example, by definition is selling an application running on top of the pipe.
Some might say the National Broadband Network in Australia, or any other wholesale-only network services business is akin to a “dumb network” business strategy. But even there, when a wholesale voice service is sold, it is an application running on the network, not a true “dumb pipe” service.
That isn’t going to stop all sorts of service providers from selling or using “smart” as part of their retail branding strategy. Nor, in truth, is the notion incorrect. The point is that service providers all over the world are seriously engaged in a pursuit of new applications to create and sell that incorporate communications features enabled by their networks.
Telefónica Digital, for example, touts “Smart M2M,” a web-based platform for machine-to-machine (M2M) communications. How precisely any active mobile device could provide communications for a sensor function, without being a “smart” activity, is a subtle matter.
“Smart M2M” provides real time monitoring of traffic type, volume and current consumption, technical supervision of lines (maps of connected devices, advanced diagnostics) and localization, Telefónica Digital says.
The service includes fraud detection functionalities, including the ability to restrict communications between a list of given devices or the possibility to establish traffic caps.
NTT Docomo, for its part, now talks about “smart ARPU.”
Minoru Etoh, managing director with NTT Docomo, says Docomo now refers to new value added services including music and video on demand as “smart ARPU (average revenue per user),” which now accounts for about 10 percent of NTT Docomo revenue.
There already is only so much revenue service providers can earn from end users buying mobile broadband, said Etoh. Call that dumb pipe, best effort Internet access. But Docomo is pinning its future revenue growth on “smart ARPU” applications and services that are built on the assumption a customer is buying the “dumb” access services.
Google Extends Olive Branch to French Publishers
Google has offered French publishers about 50 million euros for the right to index their content. The problem Google (and other search engines) face is that the French government is threatening to pass new laws requiring such payments.
That news comes as Google reportedly also is paying Orange to "terminate" or deliver its content to Orange end users. That deal, in all likelihood, is not what it seems to be. Google operates one of the largest IP networks in the world, so that specific deal probably is not a payment to Orange to deliver traffic, but only a traditional carrier-to-carrier termination agreement.
The deal Google offered to the publishers includes the purchase of advertising space from Google, on paper and digital media, a commercial collaboration between publishers and the search engine, and the use by the publishers of Google’s advertising platform AdSense.
Media owners rejected the offer, saying they wanted annual income of about 70 to 100 million euros.
The pressure from French publishers shows a possible crack in the traditional business relationship between some large application providers and some large media and telecom interests. Both of those industries want more of a share of Internet ecosystem revenue, and such fees as Google supposedly is paying are one way of achieving those objectives.
Policy issues aside, the French media issue is significant, as Google now increasingly is faced with a choice: create new business and commercial deals with business partners, even when it would, in principle, rather not do so, or have regulators and legislators potentially force it to do so anyway, on terms Google will have no control over, or ability to influence.