Sunday, November 1, 2009
How to Save Newspapers, Maybe
Newspaper readership has been declining for decades. Proposals to have the government subsidize them seem not only dangerous (the press is supposed to be a watchdog for the people against the power of the government) but stupid. Should we subsidize the telegraph because everybody uses telephones, mobiles, IM, SMS, microblogging and blogging to send messages?
Distribution channels and formats change over time. So does media. I don't know whether this is the answer. But it's interesting.
Labels:
digital media,
media use
Gary Kim has been a digital infra analyst and journalist for more than 30 years, covering the business impact of technology, pre- and post-internet. He sees a similar evolution coming with AI. General-purpose technologies do not come along very often, but when they do, they change life, economies and industries.
The Way to Deal with an Outage
Communications network outages occur more often than most users realize. When they do, the only way to respond, beyond restoring service as rapidly as possible, is to apologize. Quite often, perhaps most of the time, it also helps to explain what happened.
Junction Networks, for example, had an unexpected outage Oct. 26, 2009 for about an hour and a half, and the company's apology and explanation is a good example of what to do when the inevitable outage does occur. First, apologize.
"We do sincerely apologize for this service interruption. We know that you have many choices for your phone service, and we deeply appreciate your patience and understanding during yesterday's interruption of service. Below are the full details of the service issue."
Then remind users where they can get information if an outage ever occurs again.
"One of the first things we do when a service issue occurs is update our Network Alert Blog and Twitter page with as much information as we have at that time. We then post comments to that original post as we learn more. Our Network Alert blog is here: http://www.junctionnetworks.com/blog/category/network-alerts"
"Our Twitter account is: http://www.twitter.com/onsip."
Junction Networks then provides a detailed description of its normal maintenance activities, which can cause "planned outages" with an intentional shift to backup systems.
"As a rule, Junction Networks maintains three different types of maintenance windows:
1.) Weekend - early morning: The maintenance performed will produce a service disruption and could affect multiple systems.
2.) Weekday - early morning: The maintenance performed may produce a service disruption, but is isolated to a single system.
3.) Intra-day: The work performed should not affect our customers.
All maintenance, even that which is known to cause a service disruption, is not expected to cause a disruption for more than a few fractions of a second. For anything that would cause a more serious disruption (one second or more), backup services are swapped in to take the place of the maintenance system."
The company then explains why the specific Oct. 26 outage happened, in some detail, and then the remedies it applied.
Nobody likes outages, but they are a fact of life, and for a simple reason. Today's electronic devices typically are designed to tolerate anywhere from minutes to several days' worth of "outages" each year. If you've ever had to reboot a device, that's an outage. If you've ever had software "hang," requiring a reboot, that's an outage.
Now imagine the number of normally reliable devices that have to be connected in series to complete any point-to-point communications link: every application running on the servers, switches, routers and gateways, and on the active opto-electronics, in all the networks that must be connected for any single point-to-point session to occur.
Don't forget the power supplies, power grid, air conditioners and potential accidents that can take a session out. If a backhoe cuts an optical line, you get an outage. If a car knocks down a telephone pole, you can get an outage.
Now remember your mathematics. Any number less than "one," when multiplied by any other number less than "one," necessarily results in a number that is smaller than the original quantity. In other words, as one concatenates many devices, each individually quite reliable, the reliability or availability of the whole system gets worse.
A single device with 99-percent reliability can be expected to fail for 3 days, 15 hours and 40 minutes every year. But that's just one device. If any session traverses 50 devices in series, each with that same 99-percent reliability, the system as a whole is only as reliable as the product of the availabilities of each discrete device.
In other words, you have to multiply a number less than "one" by 49 other numbers, each less than "one," to determine overall system reliability.
As an example, consider a system of just 12 devices, each 99.99 percent reliable, and expected to fail about 52 minutes, 36 seconds each year. The whole network would then be expected to fail about 10.5 hours each year.
Networks built from elements less reliable than 99.99 percent, or with more discrete elements in series, will fail for longer periods of time.
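The arithmetic above can be sketched in a few lines (a minimal sketch; the availability figures are the examples used in this post):

```python
# Availability of devices in series: the system is only as available
# as the product of each element's individual availability.

MINUTES_PER_YEAR = 365.25 * 24 * 60

def downtime_minutes(availability: float, devices: int = 1) -> float:
    """Expected annual downtime for `devices` identical elements in series."""
    system_availability = availability ** devices
    return (1 - system_availability) * MINUTES_PER_YEAR

# One device at 99 percent: roughly 3 days, 15 hours and 40 minutes per year.
print(downtime_minutes(0.99) / 60)        # ~87.7 hours

# Twelve devices at 99.99 percent each: roughly 10.5 hours per year.
print(downtime_minutes(0.9999, 12) / 60)  # ~10.5 hours
```

Note how quickly concatenation erodes availability: each 99.99-percent element alone loses less than an hour a year, but a dozen of them in series lose about 10.5 hours.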
The point is that outages can be minimized, but not prevented entirely. Knowing that, one might as well have a process in place for the times when service is disrupted.
Labels:
outage
Palm Pre, iPhone, MyTouch, Droid Compared
Here's one way of comparing some of the latest smartphones, put together by BillShrink.com. One thought comes to mind when looking at the "unsubsidized" cost of these devices.
Some users apparently do not like contracts, even if those contracts provide lower handset prices. They should be able to buy their handsets "unlocked" if they choose.
But lots of users, contemplating smartphone prices almost the same as notebooks and PCs, might well prefer the contracts, to get lower handset prices, just as most people say they "hate" commercials but will put up with a certain amount of commercials if it means "free" content access.
In a world that is "one size fits none" rather than "one size fits all," it seems to run counter to consumer preferences to ban any lawful commercial offer. Let people make their own decisions.
On the other hand, if you want to see a dramatic deceleration in smartphone adoption, with all the application innovation that is coming along with those devices, watch what happens if contracts that subsidize handset costs are outlawed.
Saturday, October 31, 2009
Google Voice has 1.4 Million Users
Google Voice has 1.419 million users, some 570,000 of whom use it seven days a week, Google says. The figures come from a letter to the Federal Communications Commission that Google apparently released accidentally; Business Week discovered the information before it was removed.
The early version of the documents also suggested Google has plans to take Google Voice global. Google apparently said it already has signed contracts with a number of international service providers.
Labels:
consumer VoIP,
Google,
Google Voice,
unified communications
How Do You Measure the Value of Something That Has No Price?
Global end user spending on communications services (voice and data, not entertainment video) runs about $1.8 trillion a year, one can extrapolate from the most recent International Telecommunications Union statistics.
Fixed line voice probably sits at about the $740 billion range in 2009.
Infonetics Research says VoIP services bring in $21 billion for service providers in the first half, so assume an annual total of $42 billion. Assume 16 percent of those revenues are for trunking services of one sort or another and voice revenues might hit $35 billion or so for the full year.
That suggests VoIP services represent about 4.7 percent of total global voice revenues in 2009.
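The back-of-envelope math runs like this (a sketch of the figures above; the second-half revenue and the trunking share are the assumptions stated in the text):

```python
# Figures from the post; H2 revenue and trunking share are stated assumptions.
h1_voip_revenue = 21e9                 # Infonetics: H1 2009 VoIP service revenue
annual_voip = h1_voip_revenue * 2      # assume H2 matches H1: $42 billion
trunking_share = 0.16                  # assume 16 percent is trunking
voice_voip = annual_voip * (1 - trunking_share)   # ~$35 billion
fixed_line_voice = 740e9               # estimated 2009 fixed-line voice revenue
voip_share = voice_voip / fixed_line_voice
# ~4.7 to 4.8 percent, depending on whether the $35.28 billion
# intermediate figure is rounded down to $35 billion first.
print(round(voip_share * 100, 1))
```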
The point is that VoIP remains a relatively small portion of global voice revenues. But the situation is more complicated than simply how VoIP stacks up as a revenue driver. The larger problem with voice, as everyone agrees, is that it is trending toward becoming an "application," not a service. That means it will sometimes be provided "at no incremental cost," or at "very low incremental cost."
The value VoIP represents cannot be strictly measured using "revenue" metrics, any more than the value of email, instant messaging or presence can be measured by revenue metrics. Probably all that anyone can say with some assurance is that the value VoIP represents is greater than five percent of the total value of voice communications, as many sessions occur on a "non-charged" basis.
Many years ago, consumers got access to email in one of two ways: from their employers, or by buying dial-up Internet access and getting email from their ISPs. In neither case was it, nor is it, possible to calculate the economic value of email, as the measurable "product" for a consumer was the dial-up Internet connection.
Business value is even harder to calculate, as organizations can buy software and hardware to host their own email, and then buy access connections that support any number of applications, without any specific fee required to host email services.
The larger point is that, in future years, the service revenue attributable directly to voice services will be a number that might remain flat, might grow or might shrink. If voice revenues ultimately shrink, as they might in many markets, or if VoIP replaces TDM versions of voice, that will not necessarily mean that people are talking less, or that the value ascribed to voice is less.
It simply will mean that the value is only indirectly measurable. Only one thing can be said for sure: products that carry no direct price will not be measured directly. The first sign of this is the increasing use of metrics such as "revenue generating units," "services per customer" or "average revenue per user."
At some point, though it might still be a measurable quantity, the value of voice services will be only partially represented by "service" revenue. It's tough to measure the value of something that has no specific "incremental cost."
So what will market researchers and agencies do? What they have done before: they will measure the value of some associated product that does have a market price. They will measure the value of purchased access connections, rather than particular applications, much as one could measure ISP access subscriptions, but not the value of email.
Will Moore's Law "Save" Bandwidth Providers, ISPs?
In the personal computer business there is an underlying assumption that whatever problems one faces, Moore's Law will provide the answer. Whatever challenges one faces, the assumption generally is that if one simply waits 18 months, twice the processing power or memory will be available at the same price.
For a business where processing power and memory actually will solve most problems, that assumption is largely correct.
For any business where most or nearly all cost has nothing to do with the prices or capabilities of semiconductors, Moore's Law helps, but it does not solve the problem of continually growing bandwidth demand and continually decreasing revenue per bit earned for supplying that higher bandwidth.
That is among the fundamental problems network transport and access providers face. And Moore's Law is not going to solve the problem of increasing bandwidth consumption, says Jim Theodoras, ADVA Optical director of technical marketing.
Simply put, most of the cost of increased network throughput is not caused by the prices of underlying silicon. In fact, network architectures, protocols and operating costs arguably are the key cost drivers these days, at least in the core of the network.
The answer to the problem of "more bandwidth" is partly "bigger pipes and routers." There is some truth to that notion, but not the complete truth. As bandwidth continues to grow, there is some point at which the "protocols can't keep up, even if you have unlimited numbers of routers," says Theodoras.
The cost drivers lie in bigger problems such as network architecture, backhaul, routing protocols and personnel costs, he says. One example: there often is excess and redundant gear in a core network that simply is not being used efficiently. In many cases, core routers run at only 10 percent of their capacity. Improving throughput to 80 percent or 100 percent potentially offers an order of magnitude better performance from the same equipment.
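The utilization point is easy to illustrate (the installed capacity is a hypothetical figure; the utilization percentages are the ones cited above):

```python
# Hypothetical router capacity; utilization figures are the ones cited above.
installed_capacity_gbps = 100.0
current_util, target_util = 0.10, 0.80
effective_now = installed_capacity_gbps * current_util      # 10 Gbps useful
effective_target = installed_capacity_gbps * target_util    # 80 Gbps useful
print(effective_target / effective_now)   # 8.0: close to an order of magnitude
```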
Likewise, automated provisioning tools can reduce provisioning time by 90 percent or more, he says. And since "time is money," operating cost for some automated operations also can be cut by an order of magnitude.
The point is that Moore's Law, by itself, will not provide the solutions networks require as they keep scaling bandwidth under conditions where revenue does not grow linearly with the new capacity.
How Many People Will Buy a 50 Mbps Access Service?
Virgin Media now says it has 20,000 subscribers buying its 50 Mbps service. Virgin Media has about 3.77 million broadband access customers. So that suggests about one half of one percent of its customers are buying that grade of service.
I'd be willing to bet U.S. service providers offering a 50 Mbps service are seeing about that take rate as well, with one possible exception. SureWest Communications has been offering tiers that fast longer than anybody else I can think of, and probably can claim a higher subscription rate.
Virgin Media's current promotion for the 50 Mbps product offers a price of £18 a month (about $29.74) for three months and £28 (about $46.26) a month after that, when bundled with a Virgin Media phone line.
Those sorts of prices will make U.S. consumers jealous, but it is hard to compare pricing across regions and nations. Voice and text message prices on U.K. mobiles are far higher than in the United States, though broadband and video entertainment prices seem to be lower, across the board.
SureWest's 50 Mbps and 100 Mbps products are different, though, as they offer symmetrical bandwidth, not the asymmetrical bandwidth typical of DOCSIS 3.0 services such as those provided by Virgin Media.
When SureWest first introduced its 50 Mbps symmetrical product, it was available as part of a high-end quadruple play bundle including the 50 Mbps access service; a 250-channel digital TV service; unlimited local and long distance telephone and unlimited wireless.
The package was priced at $415.18 a month. If it were offered on a stand-alone basis, SureWest said the 50 Mbps service would be valued at $259.95 per month. Not many consumers are interested in paying that much.
Labels:
broadband,
bundles,
marketing,
SureWest,
Virgin Group