Thursday, June 26, 2008
IP-Based VPNs to Surpass Frame Relay, ATM in 2009
2009 should be the year the installed base of IP-based virtual private networks surpasses the installed base of frame relay and asynchronous transfer mode sites, Vertical Systems Group forecasts suggest.
The shift has been going on for years, but a crossover point would be significant, as it will be when the installed base of IP phone systems surpasses that of digital systems.
Adoption of the ascendant technology gets a boost when vendors begin to slow and end development of legacy applications and gear.
Gary Kim has been a digital infra analyst and journalist for more than 30 years, covering the business impact of technology, pre- and post-internet. He sees a similar evolution coming with AI. General-purpose technologies do not come along very often, but when they do, they change life, economies and industries.
Sprint Wins with Instinct
Sprint Nextel Corp. has broken company sales records with its new Samsung Instinct, leading to temporary shortages of the touch-screen phone in some stores, the company says. Despite placing what it calls the largest-ever initial order for a 3G handset, Sprint still underestimated demand, apparently.
E-mail access, Internet browsing, GPS navigation tools, interactive maps and one-touch click-to-call access have met "extremely heavy use," the company says.
The phone costs $129.99 with a two-year contract and a $100 mail-in rebate. The obvious observation: iPhone has had a transforming impact on handset design.
Intel Sees "No Compelling Case" for Vista
New York Times staff reporter Steve Lohr says Intel has decided against upgrading the computers of its own 80,000 employees to Microsoft's Vista operating system.
“This isn’t a matter of dissing Microsoft, but Intel information technology staff just found no compelling case for adopting Vista,” an Intel source says.
To be sure, large enterprises have all sorts of applications that might have to be upgraded or modified when making a major operating system change. Consumers don't generally have those problems.
Still, it's a bit striking when a major Microsoft partner makes a decision like that. Chipmakers like it when new operating systems and apps require lots more powerful processors and lots more memory. Except when it's their money, apparently.
ICANN Changes Domain Name Scheme
The Internet Corporation for Assigned Names and Numbers has voted unanimously to relax the rules governing "top-level" domain names, such as .com or .uk, by 2009.
The decision means that companies could turn brands into Web addresses, while individuals could use their names.
Domain names written in scripts, such as Asian and Arabic, also were approved.
At the moment, top-level domains are limited to individual countries, such as .uk (UK) or .it (Italy), as well as to commerce, .com, and to institutional organisations, such as .net or .org.
Under the new plans, domain names can be based on any string of letters. Individuals will be able to register a domain based on their own name, for example, as long as they can show a "business plan and technical capacity".
Companies will be able to secure domain names based on their intellectual property, such as individual brand names.
Broadband "Over-supply" Problem
Cox Communications is doubling the download speed of its most popular residential Internet service (Preferred) in Northern Virginia and increasing the speed of its Premier service by thirty-three percent.
PowerBoost, which supplies temporary "burst" bandwidth for uploads, also has been added to the Preferred and Premier packages. PowerBoost for downloads has been available since 2007. This is the fifth consecutive year that Cox has enhanced the speed of its Internet services in northern Virginia (credit Verizon's FiOS service for that).
Verizon has boosted FiOS downstream speeds to 50 Mbps, with 20 Mbps upstream, for its top package, available everywhere FiOS is sold.
Cox customers will get the speed increases automatically in July, without need for a call or technician visit.
The PowerBoost feature means users of the Preferred package will experience speeds up to 12.5 Mbps down/2.5 Mbps up. Premier customers can achieve 25 Mbps down/3.5 Mbps up.
Policy advocates often complain about the U.S. "broadband problem." Sometimes they mean it isn't available, isn't fast enough or costs too much. The evidence suggests availability isn't a problem. Whether a service is "fast enough" is a matter of interpretation, but I don't see evidence of anything but increasing speeds, often for the same cost. "Price" likewise is a matter of interpretation.
With the exception of Japan and South Korea, though, cost per Mbps in the United States is quite comparable to nearly all other leading nations.
Complaining about broadband is a bit like similar observations we could easily have made about wireless penetration or use of text messaging, where U.S. users lagged way behind European users for quite some time. That "problem" doesn't exist anymore.
Neither will the "broadband" problem. Have there been issues with availability and speed? Yes. Are those problems in the process of resolution? Yes. Pointing to the existence of problems is fine. Ignoring clear evidence that problems rapidly are being fixed is either misinformed, intellectually dishonest or sloppy.
Some people like to say the definition of broadband is a problem, pointing to data collection that defines "broadband"--at minimum--as 200 kbps. That is wrong, also. The FCC recently changed its minimum definition to 768 kbps. A couple of points.
The only definition the global telecom industry ever has formally set was way back when ISDN was created. Broadband still formally is defined as any bit rate over "voice" rates of 64 kbps. So 128 kbps "traditionally" has been considered "broadband."
Markets have moving definitions. But you can hardly fault the FCC for initially setting a minimum standard that is in fact above the recognized global nomenclature. In recent practice, industry executives might have considered broadband to be 1.544 Mbps or above, while anything between 64 kbps and 1.544 Mbps is "wideband."
All that is meaningless. It will be even more meaningless when cable operators start branding some broadband speeds as "wideband," to suggest it is more bandwidth than "broadband." Markets may like that. But it doesn't change the only formal definition the global engineering community ever has embraced.
Also, "minimum" is one thing. "Maximum" or "mean" are other things. Megabit access now is the norm. Targets will continue to shift higher over time. Call it the broadband version of grade inflation. The minimum "passing" grade might be a "D." That doesn't mean people expect that to be the norm.
The United States once had a major "broadband" availability problem. It no longer does. There are places where "access" by wire remains a problem. Most of those places have satellite alternatives, though. And many places have fixed wireless access as well.
Honestly, most potential users have one or two wired networks to choose from, two satellite providers and two or three mobile providers. Many consumers soon will be able to choose from as many as five mobile broadband providers.
Under-supply won't be an issue for most, much longer. Over-supply is the looming problem.
Compute Remotely, Assemble Locally
There's an obvious relationship between cloud computing, "over the top" applications, open networks, open devices, and the importance of application program interfaces.
The ability to compute, store data and execute code remotely means it is more affordable than ever for small developers and individuals to create applications that are immediately available to users anywhere. The existence of those applications "on the Web" makes the Web a more-powerful platform for bringing applications of any sort to market. That puts business pressure on walled garden business models of all sorts.
The existence of cloud computing also means software is becoming unbundled from hardware to a large extent. Not completely unbundled; not unbundled for every application or service. In fact, some apps require tight integration to execute with the greatest elegance. But the direction is more in the direction of how people use PCs than how they consume cable television.
The application explosion, built on open platforms and APIs, also means new applications can be built on the shoulders of existing apps and applets. Assembling apps begins to be a process akin to what one does with Legos, to oversimplify.
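The Lego analogy can be made concrete with a toy sketch: two hypothetical services, each hidden behind a simple API, composed into a third "assembled" app. All service names and data here are illustrative, not real APIs.

```python
# Toy sketch of API-based app assembly: two stand-in services,
# each exposed as a function, composed into a "mashup" app.
# Everything here is hypothetical and illustrative.

def geocode(address):
    """Stand-in for a hypothetical mapping API: address -> coordinates."""
    known = {"1 Main St": (38.9, -77.3)}
    return known.get(address)

def nearby_stores(lat, lon):
    """Stand-in for a hypothetical local-search API."""
    return ["Coffee Shop", "Bookstore"] if lat else []

def store_finder(address):
    """The 'assembled' app: built entirely from the two APIs above."""
    coords = geocode(address)
    if coords is None:
        return []
    return nearby_stores(*coords)

print(store_finder("1 Main St"))
```

The point of the sketch is that `store_finder` contains almost no logic of its own; like a Lego build, its value comes from snapping existing pieces together.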
That also means apps more often are created globally, assembled locally. That has implications for browsers, networks and protocols. To assemble apps locally means a premium for rapid response. If assembled apps are to mimic the feel of locally-stored apps, response time is a crucial requirement. This requires more than big, fast pipes. It means browsers that are much faster than we have used in the past. It means a computing architecture that does not require so much traversing of wide area networks to grab app elements.
The issue is to answer a question: “How do I pair together one customer that’s CPU-intensive and another that’s IO-intensive and have the sum appear just like a single, well performing application?”
There is lots of room for innovation here. And lots of distance to cover. But it's coming, even if most users only gradually are being exposed to use of remote and locally-assembled apps.
Wednesday, June 25, 2008
Where is the Network API?
At the recent "Rethinking Communications" conference, several panelists commented about the difficulty of creating anything like an application program interface to "the network." APIs are a common way to hide the details of any application or function from software developers. The idea is to compartmentalize functions enough that a developer doesn't have to know how everything works; only what is necessary to invoke some function or operation, or add some function.
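The compartmentalization the panelists describe can be sketched in a few lines (all class and method names here are hypothetical): a developer calls one exposed method and never touches the protocol subsystem that does the work.

```python
# Minimal illustration of an API hiding network internals.
# All names are hypothetical; this models the idea, not a real carrier API.

class _SmppGateway:
    """Internal messaging subsystem; callers never touch this directly."""
    def submit(self, dest, text):
        return f"SMPP submit to {dest}: {text}"

class NetworkAPI:
    """The exposed surface: one method, no protocol details leaked."""
    def __init__(self):
        # The subsystem behind the API can be swapped without
        # breaking any third-party code that calls send_sms().
        self._gateway = _SmppGateway()

    def send_sms(self, dest, text):
        return self._gateway.submit(dest, text)

api = NetworkAPI()
print(api.send_sms("+15551234567", "hello"))
```

The carrier's problem, as the panelists note, is that today's network has many such subsystems with no single surface wrapping them all.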
Right now the problem is that the "network" is full of subsystems that aren't actually unified enough to present a single API to any third party developer. IP Multimedia Subsystem will help, and right now Session Initiation Protocol comes as close as anything to being an API, though the analogy is rough.
The other issue: programmers, almost by nature, will stress test the limits of any network demarcation a network wishes to expose. "Give them an inch; they'll take a mile," Trevor Baca, Jaduka VP, says.
That isn't likely to raise comfort levels on the carrier side. But some middle ground has to be reached if carriers are to benefit from skills third party developers can put to work.