Thursday, June 26, 2008

Intel Sees "No Compelling Case" for Vista

New York Times staff reporter Steve Lohr says Intel has decided against upgrading the computers of its own 80,000 employees to Microsoft's Vista operating system.

“This isn’t a matter of dissing Microsoft, but Intel information technology staff just found no compelling case for adopting Vista,” an Intel source says.

To be sure, large enterprises have all sorts of applications that might have to be upgraded or modified when making a major operating system change. Consumers don't generally have those problems.

Still, it's a bit striking when a major Microsoft partner makes a decision like that. Chipmakers like it when new operating systems and apps require lots more powerful processors and lots more memory. Except when it's their money, apparently.

ICANN Changes Domain Name Scheme

The Internet Corporation for Assigned Names and Numbers has voted unanimously to relax the rules on "top-level" domain names, such as .com or .uk, by 2009.

The decision means that companies could turn brands into Web addresses, while individuals could use their names.

Domain names written in non-Latin scripts, such as Asian and Arabic scripts, also were approved.

At the moment, top-level domains are limited to individual countries, such as .uk (UK) or .it (Italy), as well as to commercial use (.com) and to institutional organisations (.net, .org).
[BBC infographic showing domain name sales]

Under the new plans, domain names can be based on any string of letters. Individuals will be able to register a domain based on their own name, for example, as long as they can show a "business plan and technical capacity".

Companies will be able to secure domain names based on their intellectual property, such as individual brand names.

Broadband "Over-supply" Problem

Cox Communications is doubling the download speed of its most popular residential Internet service (Preferred) in Northern Virginia and increasing the speed of its Premier service by thirty-three percent.

PowerBoost, which supplies temporary "burst" bandwidth for uploads, also has been added to the Preferred and Premier packages. PowerBoost for downloads has been available since 2007. This is the fifth consecutive year that Cox has enhanced the speed of its Internet services in Northern Virginia (credit Verizon's FiOS service for that).

Verizon has boosted FiOS downstream speeds to 50 Mbps, with 20 Mbps upstream, for its top package, available everywhere FiOS is sold.

Cox customers will get the speed increases automatically in July, without need for a call or technician visit.

The PowerBoost feature means users of the Preferred package will experience speeds up to 12.5 Mbps down/2.5 Mbps up. Premier customers can achieve 25 Mbps down/3.5 Mbps up.

Policy advocates often complain about the U.S. "broadband problem." Sometimes they mean it isn't available, isn't fast enough or costs too much. The evidence suggests availability isn't a problem. Whether a service is "fast enough" is a matter of interpretation, but I don't see evidence of anything but increasing speeds, often for the same cost. "Price" likewise is a matter of interpretation.

With the exception of Japan and South Korea, though, cost per Mbps in the United States is quite comparable to nearly all other leading nations.

Complaining about broadband is a bit like the complaints we could easily have made about wireless penetration or use of text messaging, where U.S. users lagged way behind European users for quite some time. That "problem" doesn't exist anymore.

Neither will the "broadband" problem. Have there been issues with availability and speed? Yes. Are those problems in the process of resolution? Yes. Pointing to the existence of problems is fine. Ignoring clear evidence that problems rapidly are being fixed is either misinformed, intellectually dishonest or sloppy.

Some people like to say the definition of broadband is a problem, pointing to data collection that defines "broadband"--at minimum--as 200 kbps. That is wrong, also. The FCC recently changed its minimum definition to 768 kbps. A couple of points.

The only definition the global telecom industry ever has formally set was way back when ISDN was created. Broadband still formally is defined as any bit rate over "voice" rates of 64 kbps. So 128 kbps "traditionally" has been considered "broadband."

Markets have moving definitions. But you can hardly fault the FCC for initially setting a minimum standard that is in fact above the recognized global nomenclature. In recent practice, industry executives might have considered broadband to be 1.544 Mbps or above, while anything between 64 kbps and 1.544 Mbps is "wideband."

All that is meaningless. It will be even more meaningless when cable operators start branding some broadband speeds as "wideband," to suggest it is more bandwidth than "broadband." Markets may like that. But it doesn't change the only formal definition the global engineering community ever has embraced.

Also, "minimum" is one thing. "Maximum" or "mean" are other things. Megabit access now is the norm. Targets will continue to shift higher over time. Call it the broadband version of grade inflation. The minimum "passing" grade might be a "D." That doesn't mean people expect that to be the norm.

The United States once had a major "broadband" availability problem. It no longer does. There are places where "access" by wire remains a problem. Most of those places have satellite alternatives, though. And many places have fixed wireless access as well.

Honestly, most potential users have one or two wired networks to choose from, two satellite providers and two or three mobile providers. Many consumers soon will be able to choose from as many as five mobile broadband providers.

Under-supply won't be an issue for most, much longer. Over-supply is the looming problem.

Compute Remotely, Assemble Locally

There's an obvious relationship between cloud computing, "over the top" applications, open networks, open devices and the importance of application program interfaces.

The ability to compute, store data and execute code remotely means it is more affordable than ever for small developers and individuals to create applications that are immediately available to users anywhere. The existence of those applications "on the Web" makes the Web a more-powerful platform for bringing applications of any sort to market. That puts business pressure on walled garden business models of all sorts.

The existence of cloud computing also means software is becoming unbundled from hardware to a large extent. Not completely unbundled; not unbundled for every application or service. In fact, some apps require tight integration to execute with the greatest elegance. But the trend is more in the direction of how people use PCs than how they consume cable television.

The application explosion, built on open platforms and APIs, also means new applications can be built on the shoulders of existing apps and applets. Assembling apps begins to be a process akin to what one does with Legos, to oversimplify.
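To make the "Legos" idea concrete, here is a minimal sketch, in Python, of an app assembled from two existing web APIs: one supplying geolocation, another supplying weather, combined into a single response. The endpoints and field names are entirely hypothetical placeholders, not real services.

# Minimal "mashup" sketch: assemble a small app from two existing web APIs.
# The endpoints below are hypothetical placeholders, not real services.
import json
import urllib.request

def fetch_json(url):
    """Fetch a URL and parse the JSON body."""
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

def local_weather(ip_address):
    # Step 1: ask a (hypothetical) geolocation API where this user is.
    geo = fetch_json(f"https://api.example-geo.test/locate?ip={ip_address}")
    # Step 2: feed that result into a (hypothetical) weather API.
    wx = fetch_json(
        "https://api.example-weather.test/current"
        f"?lat={geo['lat']}&lon={geo['lon']}"
    )
    # Step 3: the "new" application is just the composition of the two.
    return {"city": geo["city"], "temp_f": wx["temp_f"], "conditions": wx["summary"]}

if __name__ == "__main__":
    print(local_weather("203.0.113.7"))

The point of the sketch is that the developer writes almost no new logic; the value is in the composition, which is why open APIs matter so much to this model.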

That also means apps more often are created globally and assembled locally. That has implications for browsers, networks and protocols. Assembling apps locally puts a premium on rapid response. If assembled apps are to mimic the feel of locally stored apps, response time is a crucial requirement. This requires more than big, fast pipes. It means browsers that are much faster than we have used in the past. It means a computing architecture that does not require so much traversing of wide area networks to grab app elements.

The issue is to answer a question: "How do I pair together one customer that's CPU-intensive and another that's IO-intensive and have the sum appear just like a single, well-performing application?"
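One way to read that question: complementary workloads can share the same machine without stepping on each other, because one is waiting on the disk or network while the other is burning CPU. The toy Python illustration below shows the scheduling intuition only; it is not any provider's actual placement logic, and the workloads are stand-ins.

# Toy illustration: a CPU-bound job and an IO-bound job sharing one machine.
# While the IO-bound job blocks waiting on the network or disk, the CPU-bound
# job keeps the processor busy, so co-locating them wastes little capacity.
import threading
import time

def cpu_bound(iterations=10_000_000):
    # Burns CPU: a tight arithmetic loop.
    total = 0
    for i in range(iterations):
        total += i * i
    return total

def io_bound(waits=20, delay=0.1):
    # Mimics IO: repeatedly blocks as if waiting on a remote call.
    for _ in range(waits):
        time.sleep(delay)

if __name__ == "__main__":
    start = time.time()
    t1 = threading.Thread(target=cpu_bound)
    t2 = threading.Thread(target=io_bound)
    t1.start(); t2.start()
    t1.join(); t2.join()
    # Elapsed time is close to the slower job alone, not the sum of both,
    # which is the appeal of pairing complementary tenants.
    print(f"combined elapsed: {time.time() - start:.2f}s")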

There is lots of room for innovation here. And lots of distance to cover. But it's coming, even if most users only gradually are being exposed to use of remote and locally-assembled apps.

Wednesday, June 25, 2008

Where is the Network API?

At the recent "Rethinking Communications" conference, several panelists commented about the difficulty of creating anything like an application program interface to "the network." APIs are a common way to hide the details of any application or function from software developers. The idea is to compartmentalize functions enough that a developer doesn't have to know how everything works; only what is necessary to invoke some function or operation, or add some function.
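As a rough sketch of what "hiding the details" means in practice, consider a hypothetical wrapper that lets a developer send a message or place a call without knowing anything about the signaling, trunking or routing underneath. The class and method names here are invented for illustration; no real carrier exposes exactly this interface.

# Hypothetical sketch of a network API: the developer sees one simple call,
# while signaling, routing, and media setup stay hidden behind the interface.
class NetworkAPI:
    def __init__(self, api_key):
        # Credentials and carrier-side configuration are handled once, here.
        self.api_key = api_key

    def send_sms(self, to_number, text):
        # In a real exposed API this would invoke the carrier's messaging
        # infrastructure; the developer never touches SS7, SIP, or SMPP.
        print(f"[stub] SMS to {to_number}: {text}")

    def start_call(self, from_number, to_number):
        # Likewise, call setup (signaling, codecs, trunking) is abstracted away.
        print(f"[stub] call from {from_number} to {to_number}")
        return {"call_id": "demo-123", "status": "ringing"}

if __name__ == "__main__":
    net = NetworkAPI(api_key="demo-key")
    net.send_sms("+15551234567", "Meeting moved to 3pm")
    net.start_call("+15557654321", "+15551234567")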

Right now the problem is that the "network" is full of subsystems that aren't actually unified enough to present a single API to any third party developer. IP Multimedia Subsystem will help, and right now Session Initiation Protocol comes as close as anything to being an API, though the analogy is rough.

The other issue: programmers, almost by nature, will stress test the limits of any network demarcation a network wishes to expose. "Give them an inch; they'll take a mile," Trevor Baca, Jaduka VP, says.

That isn't likely to raise comfort levels on the carrier side. But some middle ground has to be reached if carriers are to benefit from skills third party developers can put to work.

Cox Ups Speed in Phoenix

Cox Communications is bumping up the speeds of its cable modem service in Phoenix. For customers with Cox's Preferred service, downloads will move from 7 Mbps to 9 Mbps, with upload speeds increasing from 512 kbps to 768 kbps.

For the Premier service, customers will get 15 Mbps downloads, with bursts of up to 20 Mbps, and uploads starting at 1.5 Mbps, with bursts of up to 2 Mbps.

The additional speed comes at no additional cost. Qwest Communications is upping its digital subscriber line service to 12 Mbps for its lower-cost service and 20 Mbps for its higher-cost service.

Still, there are some who argue the United States is "falling behind" other nations, suffering from inadequate supply, high prices, slow speeds, or all of the above. One can argue about that.

One cannot argue the problem is not being addressed. Speeds keep climbing, for the same amount of money, everywhere telcos and cable companies compete with each other.

iPod Still Top Seller, Store Personnel Report

In a recent Tickermine survey of stores selling MP3 players in June 2008, including Best Buy, Radio Shack and Circuit City, the iPod was named the best-selling MP3 player by 82 percent of those polled.

Microsoft's Zune 80 GB was said to be the best seller by 12 percent of respondents. The SanDisk Sansa Clip 2 GB was said to be the best seller by six percent of respondents.

Some 62 percent of respondents say a dedicated music player is a better choice than a music-capable phone, but 38 percent reported they preferred music-capable mobile phones because it means one less item in your pocket to contend with.

DirecTV-Dish Merger Fails

DirecTV's termination of its deal to merge with EchoStar, apparently because EchoStar bondholders did not approve, means EchoStar continue...