Sunday, November 1, 2009
What Has Changed Since 2000
A few statistics will illustrate just how much has changed in the global telecom business since 2000. Prior to the turn of the century, most lines in service used wires and carried voice.
By 2007, 74 percent of all lines in service used wireless access or carried data, says the Organization for Economic Cooperation and Development.
Mobile alone accounted for 61 percent of all subscriptions in 2007, while standard phone lines had dropped to 26 percent. And the change came swiftly, in just seven years.
Mobile revenues now account for nearly half of all telecommunication revenues—41 percent in 2007—up from 22 percent 10 years earlier.
Along with the change in access methods and applications is the sheer number of connections. The total number of fixed, mobile and broadband subscriptions in the member nations of the OECD grew to 1.6 billion in 2007, compared to a population within the OECD nations of just over one billion inhabitants.
To put that in perspective, consider that there were seven access paths in use in 2007 for every access path in use in 1980. That includes broadband, wireless and voice connections.
To put those figures in even greater perspective, consider that the percentage of household budgets devoted to communication expenses has climbed only slightly over the last 10 years. In most OECD countries, households generally spend between 2.2 percent and 2.5 percent of household income on communications, year in and year out, though one can note a slow rise since 1998.
The big exception is Japan, where household spending on communications is close to seven percent of household income. That is something to keep in mind when making cross-national comparisons. It is true that Japan has very fast broadband and has pioneered any number of mobile application innovations.
But Japanese households also spend close to three times as much as U.S. households on their overall communications. It always is difficult to make meaningful comparisons between nations.
Generally speaking, though, OECD consumers have added seven new connections for every existing connection in 1980, while spending about the same percentage of their incomes on those services. That’s an obvious example of an explosion of productivity.
Much has changed in the Internet access realm as well. Broadband is now the dominant fixed access method in all OECD countries. In 2005, dial-up connections still accounted for 40 percent of fixed Internet connections but just two years later that percentage had fallen to 10 percent.
Also, while many criticize the industry for retarding innovation and behaving as “nasty monopolists,” prices have tended to fall for virtually all communication services on all platforms.
“Over the previous 18 years, residential users saw the real price of residential fixed-line phone service fall roughly one percent per year while business prices fell 2.5 percent per year,” the OECD says.
Mobile subscribers also benefitted from declining prices between 2006 and 2008. The average price of OECD “mobile baskets,” representing a number of calls and messages per year that normalizes features and prices, fell by 21 percent for low usage, 28 percent for medium usage and 32 percent for the heaviest users over the two-year period.
User voice behavior also has changed. The number of minutes of communication per mobile phone is increasing while the minutes on fixed networks are decreasing. In other words, the mobile is becoming for most people the primary voice device while the landline is a backup.
Some might argue that ultimately has implications for pricing. In some real ways, the mobile now is the “premium” device and the landline represents a supplemental service. That probably means consumers ultimately will expect the landline to be priced as a backup service.
Data between 2005 and 2007 suggest people are making fewer domestic calls on the fixed network in most countries, OECD says. When people do use fixed networks they are increasingly making calls to users of mobile phones.
This trend is well highlighted by Austria, where the introduction of flat-rate voice telephony on mobile networks has shifted calls away from the fixed-line network. Voice traffic on Telekom Austria’s fixed network fell 13.3 percent in 2007 as a result of the shift to mobile communications.
There was an OECD monthly average of 272 minutes of outgoing calls on fixed line telephones in 2007. This is down 32 minutes per month from 2005.
But an interesting landline rebound trend has appeared recently in a number of OECD countries.
The number of PSTN minutes per line declined until 2005 when the numbers started rising again. For example, French minutes per PSTN line fell until 2004 when they started to increase.
One explanation is the shift in France to flat-rate national calls offered by a number of carriers. That suggests U.S. landline voice providers might stem some of the traffic erosion by offering aggressive, flat-rate, all-distance services within the domestic market, as VoIP providers generally do.
On the mobile side, the OECD average number of outgoing minutes of completed calls on mobile networks was 220 minutes per month in 2007, up 56 percent from 2005.
Subscribers in the United States make far more outgoing calls on mobile phones each month than any other country in the OECD. The average number of minutes per mobile subscription was 443 in 2007, more than double the OECD average. One might argue that is because of the reasonable cost of calling great distances. In Europe, many calls that would be domestic in the United States are international calls.
Broadband prices have fallen as well over the same time. OECD broadband prices declined significantly over the previous three years. Prices declined an average of 14 percent per year for DSL and 15 percent for cable between 2005 and 2008.
The average price of a low-speed connection (2 megabits per second or less downstream) was $32 per month in September 2008. At the other end of the scale, broadband connections with download speeds advertised as faster than 30 megabits per second averaged $45 per month.
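Those two data points imply a steep per-unit discount at the high end. Here is a minimal back-of-the-envelope sketch using the September 2008 averages just cited; because 2 megabits per second and 30 megabits per second are tier boundaries rather than measured averages, the results are bounds, not exact figures.

# Back-of-the-envelope price per advertised megabit, from the OECD
# September 2008 averages cited above. The tier speeds are boundaries,
# so these are bounding figures, not exact averages.
low_price, low_mbps = 32.0, 2.0     # <= 2 Mbps downstream tier
high_price, high_mbps = 45.0, 30.0  # > 30 Mbps tier, using 30 as a floor

print(f"Low tier: at least ${low_price / low_mbps:.2f} per Mbps")    # ~$16.00
print(f"High tier: at most ${high_price / high_mbps:.2f} per Mbps")  # ~$1.50

On a per-megabit basis, in other words, the fastest tiers were at least an order of magnitude cheaper than the slowest.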
Despite the falling price-per-unit trends, the telecommunications services market, worth about a trillion dollars a year in the OECD, continues to grow at about a six-percent annual rate. That remains to be tested as we finish 2009, but there is reasonable historic precedent for continued growth, though perhaps not at a six-percent rate.
Regarding voice and new mobile and data services, we might as well note that landline voice appears to be a product like any other. That is to say, like any other product, it has a product life cycle.
To be specific, wireline voice looks like a product in its declining phase. Optical fiber-based broadband looks like a product earlier in its cycle, with 56 percent compound annual growth since 2005.
Digital subscriber line and cable modem services likely are further along their curves. DSL grew at a compounded rate of 21 percent per year while cable modem service grew at 18 percent rates between 2005 and 2007.
Mobile voice markets grew by 10 percent each year since 2005 but may be nearing saturation levels in a number of OECD markets. Mobile broadband clearly is early in its product life cycle.
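For readers who want a feel for what those compound rates mean, here is a small sketch of the total growth they imply over a two-year window such as 2005 to 2007, assuming simple annual compounding.

# What the compound annual growth rates quoted above imply over a
# two-year window such as 2005-2007 (a sketch; simple compounding assumed).
def growth_multiple(cagr, years=2):
    """Total growth multiple implied by a compound annual growth rate."""
    return (1 + cagr) ** years

for name, cagr in [("fiber", 0.56), ("DSL", 0.21),
                   ("cable modem", 0.18), ("mobile voice", 0.10)]:
    print(f"{name}: {growth_multiple(cagr):.2f}x over two years")
# fiber: 2.43x, DSL: 1.46x, cable modem: 1.39x, mobile voice: 1.21x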
Analog lines, used for voice, facsimile and dial-up Internet access, also seem to be in decline. The number of analog subscribers fell by 34 million between 2005 and 2007.
The decline of Internet dial-up services also means that many households no longer need a second analog line. The same might be true for in-home fax machines. And many additional lines once used by teenagers now have been replaced by mobiles.
Finally, the number of “mobile-only” subscribers has increased as well.
The penetration rate for fixed telephone lines (analog and ISDN) in 2007 was 41 subscribers per 100 inhabitants, which was less than the penetration rate ten years earlier.
Overall, the penetration rate rose from 43 per 100 inhabitants in 1996 to a peak of 47 in 2000, only to decline again to 41 in 2007. The year 2000 appears to be the turning point in the technological life cycle of fixed-line telephony.
Canada had the highest fixed-line penetration in 2007, with a rate of 54 subscribers for every 100 inhabitants. Sweden, Luxembourg and the United States all had penetration rates greater than 50 per 100 inhabitants. Mexico, the Slovak Republic and Poland had the lowest penetration rates in 2007.
There’s an interesting observation we can make about those figures. Nobody seems to argue that the United States has a big problem with voice service availability. One doesn’t hear people complaining about a lack of voice availability in Canada or Sweden, either. Yet penetration sits only in the 50-per-100-inhabitants range. Availability is not the issue; consumer demand is the issue.
Nearly all Internet users in the United States use broadband, not dial-up. And yet broadband penetration might well be higher than voice penetration, on that score. People who want the product generally buy it.
That said, there are some methodological issues here. “Per capita” measures might not make as much sense, as a comparative tool, when median household sizes vary. Adoption by households, adjusted to include people who use the Internet only at work or at public locations, or using mobiles, would be better.
Broadband adoption, by people who actually use the Internet, might make the most sense of all. Broadband is a product like any other. Not every consumer values every product to the same degree.
DSL network coverage is greater than 90 percent in 22 of the 30 OECD countries. Belgium, Korea, Luxembourg and the Netherlands report 100 percent.
Cable coverage is extensive in some countries such as the United States (96 percent) and Luxembourg (70 percent), but non-existent in others such as Greece, Iceland and Italy.
An analysis that followed the evolution of broadband plans over four years shows that average speeds increased by 28 percent for DSL and 72 percent for cable between 2007 and 2008.
A survey of 613 broadband offers covering all OECD countries shows the average advertised speed grew between 2007 and 2008 across all platforms except for fiber. The average advertised DSL speed increased 25 percent from 9.3 Mbps in 2007 to 11.5 Mbps in 2008.
Advertised speed of course is not user-experienced speed at all times of day. Still, it offers some measure of changes in the product.
The average advertised fiber speed actually declined between 2007 and 2008 as operators introduced new entry-level offers at speeds below 100 Mbps.
For example, Dansk Broadband in Denmark sells symmetric fiber connections at speeds between 512 kbps and 100 Mbps.
The average fixed wireless offer in 2008 was 3 Mbps, up from 1.8 Mbps just a year earlier.
Fixed wireless speeds grew by 64 percent but remain only one-quarter of the average advertised speeds of DSL providers. The average cable offer is five times faster.
There are some insights about mobile broadband in the OECD’s analysis. The amount of data traffic carried over mobile networks remains small in relation to other broadband data networks.
For example, Telstra in Australia reported in a 2008 investor briefing that data consumption increased from 100 kilobytes per month per user in 2007 to 250 kilobytes in 2008. Compare that to the gigabytes consumed on landline connections.
Data from the Netherlands also show relatively low data traffic in the first half of 2008.
Between January and June 2008, Dutch mobile broadband subscribers downloaded 358 gigabytes over mobile networks.
It is possible to calculate an estimate of mobile data traffic per 3G subscriber per month in the Netherlands by making a few assumptions. If the ratio of 3G to total mobile subscriptions in the Netherlands is equivalent to the OECD average of 18 percent, then the average amount of data traffic per 3G subscription per month in the Netherlands works out to be only 18 kilobytes per month.
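The arithmetic is easy to reproduce. In the sketch below, the 358 gigabytes and the 18 percent 3G share come from the figures above, but the total Dutch mobile subscription count is my own illustrative assumption, not a number from the source.

# Reproducing the rough estimate above. The 358 GB (Jan-June 2008) and the
# 18% OECD-average 3G share come from the text; the total Dutch mobile
# subscription count is an ASSUMED, illustrative figure, not from the source.
total_gb, months = 358.0, 6
assumed_dutch_mobile_subs = 18_500_000  # hypothetical; substitute your own estimate
three_g_share = 0.18

three_g_subs = assumed_dutch_mobile_subs * three_g_share
kb_per_sub_per_month = total_gb * 1_000_000 / months / three_g_subs  # 1 GB = 1e6 KB
print(f"~{kb_per_sub_per_month:.0f} KB per 3G subscription per month")  # ~18 KB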
Of 52 mobile broadband packages evaluated in September 2008, the average headline speed was 2.5 Mbps. Subscribers to these plans were allowed an average of 4.5 gigabytes of data traffic per month.
Much has changed in the global telecommunications business in just seven years. Landline voice might still provide the revenue mainstay, but it is a product in the declining stages of its life cycle.
Even mobile voice, DSL and cable modem service are products at something like the peak of their cycles.
Mobile broadband and optical fiber access are early in their product life cycles. Mobility is becoming the preferred way of consuming voice communications.
That’s an awful lot of change in just seven years. And we haven’t even discussed VoIP, over-the-top applications, content or video.
Labels:
business model,
marketing,
telco strategy
How to Save Newspapers, Maybe
Newspaper readership has been declining for decades. Proposals to have the government subsidize them seem not only dangerous (the press is supposed to be a watchdog for the people against the power of the government) but stupid. Should we subsidize the telegraph because everybody uses telephones, mobiles, IM, SMS, microblogging and blogging to send messages?
Distribution channels and formats change over time. So do media. I don't know whether this is the answer. But it's interesting.
Labels:
digital media,
media use
The Way to Deal with an Outage
Communications network outages occur more often than most users realize. When they do, the only way to respond, beyond restoring service as rapidly as possible, is to apologize. Quite often, perhaps most of the time, it also helps to explain what happened.
Junction Networks, for example, had an unexpected outage Oct. 26, 2009 for about an hour and a half, and the company's apology and explanation is a good example of what to do when the inevitable outage does occur. First, apologize.
"We do sincerely apologize for this service interruption. We know that you have many choices for your phone service, and we deeply appreciate your patience and understanding during yesterday's interruption of service. Below are the full details of the service issue."
Then remind users where they can get information if an outage ever occurs again.
"One of the first things we do when a service issue occurs is update our Network Alert Blog and Twitter page with as much information as we have at that time. We then post comments to that original post as we learn more. Our Network Alert blog is here: http://www.junctionnetworks.com/blog/category/network-alerts"
"Our Twitter account is: http://www.twitter.com/onsip."
Junction Networks then provides a detailed description of its normal maintenance activities, which can cause "planned outages" with an intentional shift to backup systems.
"As a rule, Junction Networks maintains three different types of maintenance windows:
1.) Weekend - early morning: The maintenance performed will produce a service disruption and could affect multiple systems.
2.) Weekday - early morning: The maintenance performed may produce a service disruption, but is isolated to a single system.
3.) Intra-day: The work performed should not affect our customers.
All maintenance, even that which is known to cause a service disruption, is not expected to cause a disruption for more than a few fractions of a second. For anything that would cause a more serious disruption (one second or more), backup services are swapped in to take the place of the maintenance system."
The company then explains why the specific Oct. 26 outage happened, in some detail, and then the remedies it applied.
Nobody likes outages, but they are a fact of life. If you think about it, there is a very simple reason. Consider today's electronic devices, designed to tolerate anywhere from minutes to several days' worth of "outages" each year. If you've ever had to reboot a device, that's an outage. If you've ever had software "hang," requiring a reboot, that's an outage.
Now imagine the number of normally reliable devices that have to be connected in series to complete any point-to-point communications link: all the applications running on the servers, switches, routers and gateways, plus the active opto-electronics in every network, that must work together for any single point-to-point session to occur.
Don't forget the power supplies, power grid, air conditioners and potential accidents that can take a session out. If a backhoe cuts an optical line, you get an outage. If a car knocks down a telephone pole, you can get an outage.
Now remember your mathematics. Any number less than "one," when multiplied by any other number less than "one," necessarily results in a number that is smaller than the original quantity. In other words, as one concatenates many devices, each individually quite reliable, the reliability or availability of the whole system gets worse.
A single device with 99-percent reliability is expected to fail for 3 days, 15 hours and 40 minutes every year. But that's just one device. If any session path has 50 devices in series, each with that same 99-percent reliability, the system as a whole is only as reliable as the product of the individual availabilities.
In other words, you have to multiply a number less than "one" by 49 other numbers, each less than "one," to determine overall system reliability.
As an example, consider a system of just 12 devices, each 99.99 percent reliable, and expected to fail about 52 minutes, 36 seconds each year. The whole network would then be expected to fail about 10.5 hours each year.
Networks with less reliability than 99.99 percent or with more discrete elements will fail for longer periods of time.
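The arithmetic behind those figures is straightforward to sketch, assuming identical devices and independent failures.

# Series-availability arithmetic behind the figures above, assuming
# independent failures: a chain of devices is only as available as the
# product of the individual availabilities.
HOURS_PER_YEAR = 8766  # 365.25 days

def series_availability(per_device, count):
    """Availability of `count` identical devices connected in series."""
    return per_device ** count

def annual_downtime_hours(availability):
    """Expected hours of downtime per year at a given availability."""
    return (1 - availability) * HOURS_PER_YEAR

# One 99%-reliable device: ~87.7 hours, i.e. 3 days, 15 hours, 40 minutes a year.
print(annual_downtime_hours(0.99))

# Twelve devices, each 99.99% available (~52 minutes, 36 seconds down alone):
# the chain as a whole is down about 10.5 hours a year.
print(annual_downtime_hours(series_availability(0.9999, 12)))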
The point is that outages can be minimized, but not prevented entirely. Knowing that, one might as well have a process in place for the times when service is disrupted.
Labels:
outage
Palm Pre, iPhone, MyTouch, Droid Compared
Here's one way of comparing some of the latest smartphones, put together by BillShrink.com. One thought comes to mind when looking at the "unsubsidized" cost of these devices.
Some users apparently do not like contracts, even if those contracts provide lower handset prices. They should be able to buy their handsets "unlocked" if they choose.
But lots of users, contemplating smartphone prices almost as high as those of notebooks and PCs, might well prefer the contracts to get lower handset prices, just as most people say they "hate" commercials but will put up with a certain amount of them in exchange for "free" content access.
In a world that is "one size fits none" rather than "one size fits all," it seems to run counter to consumer preferences to ban any lawful commercial offer. Let people make their own decisions.
On the other hand, if you want to see a dramatic deceleration in smartphone adoption, with all the application innovation that is coming along with those devices, watch what happens if contracts that subsidize handset costs are outlawed.
Saturday, October 31, 2009
Google Voice has 1.4 Million Users
Google Voice has 1.419 million users, some 570,000 of whom use it seven days a week, Google says. The figures come from information Google apparently released accidentally in a letter to the Federal Communications Commission, and which Business Week discovered before the information was removed.
The early version of the documents also suggested Google has plans to take Google Voice global. Google apparently said it already has signed contracts with a number of international service providers.
Labels:
consumer VoIP,
Google,
Google Voice,
unified communications
How Do You Measure the Value of Something That Has No Price?
Global end user spending on communications services (voice and data, not entertainment video) runs about $1.8 trillion a year or so, one can extrapolate from the most-recent International Telecommunication Union statistics.
Fixed-line voice probably sits in the $740 billion range in 2009.
Infonetics Research says VoIP services brought in $21 billion for service providers in the first half of 2009, so assume an annual total of $42 billion. Assume 16 percent of those revenues are for trunking services of one sort or another, and end-user VoIP voice revenues might hit $35 billion or so for the full year.
That suggests VoIP services represent about 4.7 percent of total global voice revenues in 2009.
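The arithmetic, with the stated assumptions made explicit, looks like this:

# The revenue arithmetic above, with the assumptions made explicit.
h1_2009_voip_revenue_b = 21.0               # Infonetics: first-half 2009, $ billions
annual_voip_b = h1_2009_voip_revenue_b * 2  # assume the second half matches the first
trunking_share = 0.16                       # assume 16% is trunking, not end-user voice
voip_voice_b = annual_voip_b * (1 - trunking_share)

fixed_voice_market_b = 740.0  # estimated 2009 global fixed-line voice, $ billions
print(f"VoIP voice revenue: ${voip_voice_b:.1f}B")  # ~$35.3B
print(f"Share of voice revenue: {voip_voice_b / fixed_voice_market_b:.1%}")  # ~4.8%

Rounding the revenue figure down to $35 billion gives the 4.7 percent cited above.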
The point is that VoIP remains a relatively small portion of global voice revenues. But the situation is more complicated than simply how VoIP stacks up as a revenue driver. The larger problem with voice revenues, as everyone agrees, is that voice is trending toward becoming an "application," not a service. That means it will sometimes be provided "at no incremental cost," or at "very low incremental cost."
The value VoIP represents cannot be strictly measured using "revenue" metrics, any more than the value of email or instant messaging or presence can be measured by revenue metrics. Probably all that anyone can say with some assurance is that the value VoIP represents is greater than five percent of the total value of voice communications, as many sessions occur on a "non-charged" basis.
Many years ago, consumers got access to email in one of two ways. They got email access from their employers, or they bought dial-up Internet access and got their email from their ISPs. In neither case was it, nor is it, possible to calculate the economic value of email, as the measurable "product" for a consumer was the value of the dial-up Internet connection.
Business value is even harder to calculate, as organizations can buy software and hardware to host their own email, and then buy access connections that support any number of applications, without any specific fee required to host email services.
The larger point is that, in future years, the service revenue attributable directly to voice services will be a number that might remain flat, might grow or might shrink. If voice revenues ultimately shrink, as they might in many markets, or if VoIP replaces TDM versions of voice, that will not necessarily mean that people are talking less, or that the value ascribed to voice is less.
It simply will mean that the value is only indirectly measurable. Only one thing can be said for sure: products that cannot be directly priced will not be directly measured. The first sign of this is the increasing use of metrics such as "revenue generating units," "services per customer" or "average revenue per user."
At some point, though it might still be a measurable quantity, the value of voice services will be only partially represented by "service" revenue. It's tough to measure the value of something that has no specific "incremental cost."
So what will market researchers and agencies do? What they have done before: they will measure the value of some associated product that does have a market price. They will measure the value of purchased access connections, rather than particular applications, much as one could measure ISP access subscriptions, but not the value of email.
Will Moore's Law "Save" Bandwidth Providers, ISPs?
In the personal computer business there is an underlying assumption that whatever problems one faces, Moore's Law will provide the answer. Whatever challenges one faces, the assumption generally is that if one simply waits 18 months, twice the processing power or memory will be available at the same price.
For a business where processing power and memory actually will solve most problems, that is partly to largely correct.
For any business where most or almost all cost has nothing to do with the prices or capabilities of semiconductors, Moore's Law helps, but does not solve the problem of continually growing bandwidth demand and continually decreasing revenue per bit earned for supplying higher bandwidth.
That is among the fundamental problems network transport and access providers face. And Moore's Law is not going to solve the problem of increasing bandwidth consumption, says Jim Theodoras, ADVA Optical director of technical marketing.
Simply put, most of the cost of increased network throughput is not caused by the prices of underlying silicon. In fact, network architectures, protocols and operating costs arguably are the key cost drivers these days, at least in the core of the network.
The answer to the problem of "more bandwidth" is partly "bigger pipes and routers." There is some truth to that notion, but not complete truth. As bandwidth continues to grow, there is some point at which the "protocols can't keep up, even if you have unlimited numbers of routers," says Theodoras.
The cost drivers lie in bigger problems such as network architecture, routing, backhaul, routing protocols and personnel costs, he says. One example is that there often is excess and redundant gear in a core network that simply is not being used efficiently. In many cases, core routers only run at 10 percent of their capacity, for example. Improving throughput up to 80 percent or 100 percent offers potentially an order of magnitude better performance from the same equipment.
Likewise, automated provisioning tools can reduce provisioning time by 90 percent or more, he says. And since "time is money," operating cost for some automated operations also can be cut by an order of magnitude.
The point is that Moore's Law, by itself, will not provide the solutions networks require as they keep scaling bandwidth under conditions where revenue does not grow linearly with the new capacity.