Thursday, February 4, 2010

Bad and Worse News on Job Front

Unemployment rose in most cities and counties in December, signaling that companies remain reluctant to hire even as the economy recovers, according to a new report from the U.S. Labor Department.

The unemployment rate rose in 306 of 372 metro areas, the Labor Department says. As bad as that is, matters may be worse.

Job losses during the recession may have been underestimated by close to a million jobs. The prevailing figure is that the recent recession cost more than seven million jobs. It appears the Labor Department might have to revise those numbers, making the actual total eight million.

The shockingly bad news is that over the last 10 years, according to ADP data, the United States actually has added no net new jobs.

In December 2000 there were 111.65 million U.S. employees working. In January 2010 there were 108.14 million Americans working.

In May 2008 there were 115.2 million U.S. workers. That means the country must add back 7.1 million jobs--or more likely 8.1 million--to get back to where it was before the recent recession began.
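The arithmetic is simple enough to check; here is a minimal sketch using the figures above, treating the roughly one-million-job revision as an estimate rather than an official number:

```python
# Rough job-gap arithmetic from the figures cited above (millions of jobs).
peak_employment = 115.2      # May 2008
current_employment = 108.14  # January 2010

gap = peak_employment - current_employment
print(f"Jobs needed to regain the pre-recession peak: {gap:.1f} million")  # ~7.1

# If the Labor Department revision adds roughly a million more lost jobs:
print(f"With the likely revision: {gap + 1.0:.1f} million")                # ~8.1
```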

That raises a question many of us have not been asking. Up to this point, the issue has been "when will the recession end?" with the implicit assumption that a relatively normal job recovery pattern would follow.

The recovery appears to have started, though we will have to wait for some time to date the actual turning point.

The new question is what happens to growth rates and job recovery as the recovery continues.

Some have argued that consumer behavior has permanently altered because of the severity of the recession, which would imply a slower rate of growth, even if other negatives were not in place.

But there is no way to test the thesis of new consumer behavior patterns in the near term, because it will take years before consumers really are free to choose new patterns of behavior. There is a difference between "permanent" changes in behavior and "temporary" changes, and we seem stuck in a "temporary" mode: people simply are not free to change their behavior right now. So long-term conclusions cannot be drawn.

That has obvious implications for the marketing of most consumer products and services. The recession is over, but recessionary buying habits will persist for some time. We cannot know whether these changes are permanent or cyclical.

Will Facebook Become a News Portal?

Is Facebook encouraging direct distribution of news content? Yes. Will it become an important "news portal"? That's hard to say, yet. But there is no question more news is appearing on Facebook, and that Facebook is encouraging that trend.

"Your friends on Facebook help you cut through the clutter so you can read what's most relevant to you, discover new items and carry on thoughtful discussions," says the Facebook blog.

"Just as your friends can post news throughout the day, so do many news outlets," Facebook says. By connecting with friends' Facebook Pages, users can stay updated and interact with outlets such as The New York Times, The Guardian and CNN, CBS Evening News and CNBC, Facebook suggests.

"At any given time, the news on your home page can consist of celebrity gossip posted by your sister, sports scores from the ESPN Page, and a political debate among your friends as they cite their favorite blogs," Facebook notes. "With so much information at your fingertips on one site, Facebook can serve as your personalized news channel.

By way of comparison, Google Reader recently accounted for 0.01 percent of upstream visits to news and media websites, Google News accounted for 1.39 percent of visits, and Facebook 3.52 percent.

In fact, Facebook recently was the fourth-largest source of visits to news sites, after Google, Yahoo! and MSN.

Wednesday, February 3, 2010

"The Perils of Prosperity: The Story Behind the Economic Crisis"

This essay is by Robert J. Samuelson, the noted U.S. economics columnist. Those of you who follow economics have likely read his columns. It's a great piece. Having lived through a few bubbles, I find it is starting to make much more sense to me. Perhaps it will make sense to you, as well.

WASHINGTON -- We need to get the story straight. Already, a crude consensus has formed over what caused the financial crisis. We were victimized by dishonest mortgage brokers, greedy bankers and inept regulators. Easy credit from the Federal Reserve probably made matters worse. True, debate continues over details. Fed Chairman Ben Bernanke recently gave a speech denying that it had loosened credit too much, though he admitted to lax bank regulation. A congressionally created commission opened hearings on the causes of the crisis. Still, the basic consensus seems well-established and highly reassuring. It suggests that if we toughen regulation, suppress outrageous avarice and improve the Fed's policies, we can prevent anything like this from ever occurring again.

There's only one problem: The consensus is wrong -- or at least vastly simplified.

Viewed historically, what we experienced was a classic boom and bust. Prolonged prosperity dulled people's sense of risk. With hindsight, we know that investors, mortgage brokers and bankers engaged in reckless behavior that created economic havoc. We know that regulators turned a blind eye to practices that, in retrospect, were ruinous, unethical and sometimes criminal. We know that the Fed kept interest rates low for a long period (the overnight Fed funds rate fell to 1 percent in June 2003). But the crucial question is: Why? Greed and shortsightedness didn't suddenly burst forth seven or eight years ago; they are constants of human nature.

One answer is this: Speculation and complacency flourished, because the prevailing view was that the economy and financial system had become safer. For a quarter-century, from 1983 to 2007, the United States enjoyed what was arguably the greatest prosperity in its history. The boom was triggered by the conquest of high inflation, which had destabilized the economy since the late 1960s. From 1979 to 1984, inflation dropped from almost 13 percent to 4 percent. By 2001, it was 1.6 percent. As inflation fell, interest rates followed -- though the relationship was loose -- and as interest rates fell, the stock market and housing prices soared. From 1980 to 2000, the value of household stocks and mutual funds increased from about $1 trillion to nearly $11 trillion. The median price for existing homes rose from $62,200 in 1980 to $143,600 in 2000; by 2006, it was $221,900.

Feeling enriched by higher home values and stock portfolios, many Americans skimped on savings or borrowed more. The personal saving rate dropped from 10 percent of disposable income in 1980 to about 2 percent 20 years later. The parallel surge in consumer spending, housing construction and renovation propelled the economy and created jobs, 36 million of them from 1983 to 2001. There were only two recessions in these years, both historically mild: those of 1990-91 and 2001. Monthly unemployment peaked at 7.8 percent in mid-1992.

The hard-won triumph over double-digit inflation in the early 1980s, engineered by then-Fed Chairman Paul Volcker and backed by newly elected President Ronald Reagan, qualifies as one of the great achievements of economic policy since World War II. The temptation is to portray it as a pleasing morality tale. The economic theories that led to higher inflation were bad; the theories that subdued higher inflation were good. Superior ideas displaced inferior ones, and the reward was the increased prosperity and economic stability of the 1980s and later. But that, unfortunately, is only half the story.

Success also planted the seeds of disaster by creating self-defeating expectations and behaviors. The huge profits made in these decades by both professional and amateur investors conditioned many to believe in the underlying benevolence of financial markets. Although they might periodically go to excess, they would ultimately self-correct without too much collateral damage. The greater stability of the real economy -- by contrast, there had been four recessions of growing severity from the late 1960s to the early 1980s -- provided an anchor. The Fed was also a backstop: Under Alan Greenspan, it was lionized for averting deep downturns after the stock market crash of 1987 and the burst "tech bubble" in 2000. Money managers, regulators, economists and the general public all succumbed to these seductive beliefs.

The explosion of the subprime-mortgage market early in the new century may now appear insane, but it had a logic. Housing prices would continue rising, because they had consistently risen for two decades. Consumers could borrow and spend more because their wealth was constantly expanding and they were less threatened by recession. That justified relaxed lending standards. Similarly, investment banks could take on more "leverage." Homeowners with weak credit histories could refinance loans on more favorable terms in two or three years, because the value of their houses would have risen. If borrowers defaulted, lenders could recover their money because the underlying homes would be worth more.

The paradox is that, thinking the world less risky, people took actions that made it more risky. The pleasures of prosperity backfired. They bred carelessness and complacency. If regulation was lax, the main reason was that regulators -- like the lenders, investors and borrowers they regulated -- shared the conventional wisdom. Markets seemed to be working. Why interfere? That was the lesson of experience, not an abstract devotion to the theory of "efficient markets," as is now increasingly argued. Euphoria, or something close to it, was considered realism.

Unless we get the story of the crisis right, we may be disappointed by the sequel. The boom-bust explanation does not exonerate greed, shortsightedness or misguided government policies. But it does help explain them. It doesn't mean that we can't -- or shouldn't -- take steps to curb dangerous risk-taking. Some of the Obama administration's proposals for "financial reform" make sense. Greater capital requirements would protect banks from losses; the ability to control the shutdown of large, failing financial institutions might avoid the chaos of the Lehman Brothers collapse; moving the trading of many "derivatives" (such as "credit-default swaps") to exchanges would create more transparency in financial markets. Because the government ultimately stands behind financial markets, regulation is justified to limit taxpayer expense and to prevent catastrophic economic instability.

But it's neither possible -- nor desirable -- to regulate away all risk. Every "bubble" is not a potential Depression. Popped bubbles and losses must occur to deter speculation and compel investors and borrowers to evaluate risk. The overregulation of finance may discourage useful innovation and clog the channels for capital on which an expanding economy depends. Finally, a single-minded focus on the blunders of Wall Street may also distract us from other possible sources of future crises, including excessive government debt and borrowing.

The larger lesson of the recent crisis is sobering. Modern, advanced democracies strive to deliver as much prosperity as possible to as many people as possible for as long as possible. They are in the business of creating perpetual booms. The cruel contradiction is that this promise itself may become a source of instability, because the more it is attained, the more people begin acting in ways that ultimately invite its destruction. Booms often have unintended and nasty side effects. Even anticipated side effects that are ultimately unsustainable -- stock market "bubbles," excessively tight labor markets -- can be hard to police, because they're initially popular and pleasurable.

The quest for ever-more and ever-better prosperity subverts itself. It might be better to tolerate more frequent, milder recessions and financial setbacks than to strive for a sustained prosperity that, though superficially more appealing, is unattainable and ends in a devastating bust. That's a central implication of the crisis, but it poses hard political and economic questions that haven't yet been asked, let alone answered.

This essay is adapted from the paperback edition of "The Great Inflation and Its Aftermath: The Past and Future of American Affluence" by Robert J. Samuelson, published by Random House at the end of January.

ADP Reveals Shocking Decade-Long Employment Data

Medium-sized businesses are back in hiring mode, according to ADP. That's the good news, since medium-sized businesses employ more Americans than big corporations and almost as many as small businesses. Nationally, these firms employ more than 42 million Americans, far more than large companies (17.8 million) and nearly as many as small businesses (48 million).

The bad news is that small businesses shed another 12,000 jobs and large businesses shed 19,000 in January 2010.

The really bad news is that over the last 10 years, according to ADP data, the United States actually has added no net new jobs.

In December 2000 there were 111.65 million U.S. employees working.

In January 2010 there were 108.14 million Americans working. From March 2007 to May 2008, U.S. employment was above the 115 million mark.

Overall, the economy still lost 22,000 jobs between December and January, according to the ADP report.

In May 2008 there were 115.2 million U.S. workers. That means the country must add back 7.1 million jobs to get back to where it was before the recent recession began.

One might argue that means 7.1 million U.S. families are spending far less than they used to, on communications and all sorts of other things. But that completely understates the matter: hundreds of millions of other consumers have ratcheted down their spending as well.

All of that likely means several more years of slow economic growth, as consumers restructure their finances, government at all levels finds it simply cannot spend so much because the tax revenue isn't there, and the other long-term impacts of unusual and unprecedented government indebtedness start to be felt.

Some have argued that consumer behavior has permanently altered. One doesn't even have to go that far to predict a long, sluggish climb back up. Behavior now is constrained in real ways. It isn't a matter of permanently altered behavior but rather of sheer inability to behave otherwise.

The recovery has begun. The bad news is that it will be hard to see, and that there is no way to test the thesis of new consumer behavior patterns for some years, because it will take years before consumers really are free to choose.

How Many New Broadband Access Lines Will be Added by Broadband Stimulus?


For most applicants, Feb. 16, 2010 to March 15, 2010 is the window for filing "broadband stimulus" requests to the Rural Utilities Service and National Telecommunications & Information Administration programs. 

Satellite providers largely will be waiting for a new "third round" aimed at funding satellite projects, funded by the RUS.  A funding window will open "later" to provide grants for satellite service for premises that remain unserved after all other Recovery Act broadband funding is awarded, NTIA says. 

It isn't clear how much funding that might entail. The RUS will be disbursing about $2.2 billion in this funding round, while the NTIA will be awarding about $2.6 billion, of which approximately $2.35 billion will be made available for infrastructure projects, $150 million for public computer center projects, and $100 million for sustainable adoption projects. 

Most of the NTIA money is expected to support middle-mile projects, rather than access. Perhaps oddly enough, that decision by NTIA means there will not be a significant increase in new broadband access facilities, since the middle-mile projects, by definition, are "backbone" projects deemed necessary to get broadband backhaul facilities into place, not to serve end users.

The RUS, on the other hand, has said its $2.2 billion will be spent directly to expand access facilities. 

Assume each new broadband line costs just $3,000, the figure suggested as an average for new rural broadband deployments. If all $2.2 billion were spent on access facilities, an additional 733,333 new broadband access lines would be added to the national total. Since there are additional costs, the actual total will be less than that.

The bulk of the total RUS funding ($2.3 billion out of a total of $2.5 billion) will be awarded in the second round. Using the same $3,000-per-line assumption, the $200 million awarded in the first round could have added about 66,667 new lines, for a grand total of roughly 800,000 lines.
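To make the arithmetic explicit, here is a minimal sketch of that calculation; the $3,000 per-line figure is the assumed average cited above, and the line counts are upper bounds before overhead:

```python
# Back-of-the-envelope estimate of broadband stimulus access lines,
# using the assumed $3,000 average cost per new rural line.
cost_per_line = 3_000  # dollars, assumed average

second_round_rus = 2_200_000_000  # RUS access funding, second round
first_round_rus = 200_000_000     # RUS funding awarded in the first round

second_round_lines = second_round_rus // cost_per_line  # 733,333
first_round_lines = first_round_rus // cost_per_line    # 66,666

total = second_round_lines + first_round_lines
print(f"Second round: ~{second_round_lines:,} lines")
print(f"First round:  ~{first_round_lines:,} lines")
print(f"Total:        ~{total:,} lines")  # roughly 800,000
```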

That is not to say the additional middle-mile facilities will not be foundational, resulting in potential new lines later. But there is no particular reason to believe an additional $3,000 per new access line will be required when the time comes to actually install access facilities.

$7.2 billion for 800,000 lines might be an unfair way to characterize the program, as some of the money will be spent for public access facilities and training, and the middle-mile infrastructure is required for eventual deployment of new access facilities. 

But it is not far from the truth to point out this near-term conclusion: the immediate change in new broadband access lines from the whole broadband stimulus program will be on the order of 800,000. There will be some additional growth when wireless broadband networks funded under the program are able to finish deployment of their new networks, of course.

But the calculation of 800,000 new lines does not account for overhead and other administrative costs that will lessen the total number of added lines. In all likelihood, adding in all other fixed broadband lines will only bring the total back up to the 800,000 range.

14% of Information Workers Use Web Conferencing Daily or Weekly

By some surveys, such as this study by Forrester Research, Web conferencing tools still have quite a way to grow.

Only about 14 percent of information workers use Web conferencing daily or weekly.

About a quarter say they use Web conferencing, compared to 26 percent who say they use instant messaging, for example.


The results of the Forrester Research survey of 2,001 U.S. information workers were "a little surprising," the company says.

Despite the heavy investment by a majority of firms, Web conferencing is still used by only one in four information workers. "Given the benefits of real-time collaboration for bridging the distances that divide many teams, it’s troubling that so few information workers use Web meeting technology regularly," Forrester researchers say.

Only four percent of information workers use Web conferencing daily. Workers in this high-need group are dominated by customer-facing employees in sales and marketing.

For 10 percent of information workers, Web conferencing is a weekly activity, largely driven by customer-facing workers.

About 76 percent of information workers don’t use Web conferencing at all.

Google Maps to Sync Android Mobile and PC Searches

Many users have grown accustomed to the idea that their appointments, contacts and email can be synchronized across their mobile and PC devices. Now Google wants to make that same sort of experience possible in Google Maps run on Android devices.

Google Maps for mobile now will "sync" searches made on PCs with searches on Android mobiles. "Personalized suggestions" make it easy to find places users previously have searched for.

There is one immediately practical value: instead of searching on a PC and printing out directions, users now will simply be able to recall searches and have the information displayed on their mobile screens when they need it.

"For example, imagine you're on your computer and you come across the Place Page for Mario's Bohemian Cigar Store Cafe," the Google blog say Michael Siliski and Taj Campbell, Google Maps staffers, on the Google Mobile Blog. "When you're ready to go and want to get directions, just open Google Maps on your phone, start typing "mar," and you'll quickly see a suggestion, saving you from re-typing a long query and making it easier and faster to be on your way."

The new feature also adds a way to "mark" places on your own maps that will appear on either a PC or Android display whenever a map near that place is displayed.

"When viewing place details, just press the star icon next to the place name; these starred places are automatically synchronized between desktop and mobile, and can be accessed from both the 'More' menu on your phone and from the My Maps tab on your computer," they say.

"Starring" and "personalized suggestions" both require that users be signed in with their Google account, and "Web History" must be enabled in order to use personalized suggestions.

Both features are available in Google Maps 3.4. On Nexus One phones, users get this version of Maps after accepting the over-the-air update that already is in progress.

For other Android devices, starring and personalized suggestions will soon be available by downloading Google Maps 3.4 from Android Market.

Can These Economic Growth and Unemployment Forecasts be Right?

As part of the annual budget, the Obama White House assumes real gross domestic product growth of 2.7 percent in 2010, followed by 3.8 percent in 2011, 4.3 percent in 2012 and 4.2 percent in 2013.

At the same time, the forecast assumes unemployment of 10 percent in 2010, with a decline to 9.2 percent in 2011, 8.2 percent in 2012 and 7.3 percent in 2013.

I'm no economist, but at least some trained economists have to be wondering how growth can occur at those accelerating rates if unemployment remains so stubbornly high.

There are some obvious answers, including the possibility that the White House does not actually believe both sets of assumptions are congruent, but has some other compelling political motivation for claiming the figures.

Other forecasts suggest that we will not recover the lost jobs of the recent recession until 2014 or even later. As consumer spending drives 70 percent of GDP, it is hard to see strong growth and high unemployment at the same time.  
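One rough way to frame the tension is Okun's law, the empirical rule of thumb that unemployment falls about half a percentage point for each point of GDP growth above potential. Here is a back-of-the-envelope sketch applying that rule to the budget's growth numbers; the 0.5 coefficient and the 2.7 percent potential-growth rate are illustrative assumptions, not figures from the budget:

```python
# Okun's law sketch: delta_unemployment ~= -coeff * (growth - potential).
# The coefficient and potential growth rate are illustrative assumptions.
okun_coeff = 0.5
potential_growth = 2.7  # percent, assumed

growth_forecast = {2011: 3.8, 2012: 4.3, 2013: 4.2}  # percent, from the budget
unemployment = 10.0  # percent, the budget's 2010 starting point

for year, growth in growth_forecast.items():
    unemployment -= okun_coeff * (growth - potential_growth)
    print(f"{year}: Okun-implied unemployment ~{unemployment:.1f}%")

# Prints roughly 9.4% (2011), 8.6% (2012) and 7.9% (2013) -- a slower
# decline than the 9.2%, 8.2% and 7.3% the budget assumes.
```

On those assumptions, growth alone brings unemployment down more slowly than the budget's own unemployment path, which is one way of stating the puzzle.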

Perhaps growth will be higher, and unemployment less bad, than these numbers suggest. As somebody who believes in the vitality of the U.S. workforce and economy, I would not bet against the United States, if impediments are not thrown in its way.

But then, I'm not a professional economist. 

Tuesday, February 2, 2010

Comcast To Buy New Global Telecom

Comcast Corp. apparently has agreed to purchase New Global Telecom Inc., according to XChange. The deal should reemphasize the growing role cable operators expect to play in the business IP communications space, beginning with the small business segment.

Based in Golden, Colo., New Global Telecom provides wholesale services to carriers and competitive service providers in the U.S. The company has recently announced a series of private-label deals, under which NGT supplies branded VoIP services to operators like American Broadband Inc.

Text Rules, Even for Older Users

A survey by Tekelec shows that text messaging, once seen as the main communications tool for teenagers and young adults, has become prevalent among older generations. The 500-person survey shows that 60 percent of users older than 45 are just as likely to use SMS as they are to make voice calls from their mobiles.

That's perhaps not good news for voice usage but shows the value of text messaging plans. About 40 percent of female users say they "mainly text," rather than talk. About 30 percent of male respondents reported they are likely to text rather than call.

Text messaging also is catching up to e-mail as the preferred means of daily international communication, with 32 percent of respondents across all ages preferring SMS, compared to 33 percent who prefer email.

So is the fact that text messaging is displacing some amount of voice a good thing for mobile service providers? Not entirely. More than 80 percent of mobile service provider revenue still is derived directly from voice, says Alan Pascoe, Tekelec senior manager.

"Of the remaining data piece, SMS has the largest chunk of revenue and the highest profitability," he says.  "Texting is particularly appealing for operators because nearly every subscriber can do it and networks have sufficient signaling bandwidth."

"Still, profitability isn’t quite keeping up with usage, thanks to all-you-can-eat plans, but operators can reduce costs with a more efficient SMS network infrastructure," Pascoe says.

Pascoe says Tekelec is not sure how much email volume is being displaced by texting. But as a general rule, younger users are more comfortable with texting than older users are, and businesses still prefer email.

"A key reason is that an SMS message implies an urgent request, whereas email is typically less urgent," he says. "Personal communication often revolves around an immediate need, like making plans, so texting is the more natural approach outside of the office."

But email is also more conducive for business tasks like sending attachments, he adds.

So will text messaging ultimately be as "archivable" as email? Certainly operators are looking at a number of ways to "add value and stickiness to SMS offerings, including archiving," Pascoe says.

"The most common ideas we hear discussed are email-like functionalities: archiving, copying, forwarding, black and white lists and group distribution," says Pascoe. "The wild card for text message archiving demand is Google Voice, which allows subscribers to store SMS in Gmail instead of on their phones, keeping messages indefinitely."

"With Google providing this for free, it may be difficult for operators to generate revenue from it," Pascoe notes.

Person-to-person messages are the foundation of SMS, and will dominate for the foreseeable future, he thinks. "But the model is evolving so that growth is strongest for person-to-application, application-to-person and machine-to-machine communications."

Why Cloud Computing is the Finger Pointing at the Moon, Not the Moon


The thing about "cloud computing" is that it is very difficult to isolate and separate from other broader changes in computing infrastructure, all of which are happening simultaneously. We are, most would agree, on the cusp of a change in basic change in computational architecture from "PC" centric to something that might be called "mobile Internet computing," for lack of a more-descriptive and well-understood term.

The point, simply, is that the shift to "cloud-based" computing is inextricably bound up with other crucial changes, such as the shift to mobile devices as the key end-user access devices and the rise of Web-based, hosted and remote applications and user experiences.

For most people, businesses and organizations, the shift of geolocational "places" where computing takes place will occur in the background. The main change is the evolution in things that can be done with computational resources.

Aside from connecting something like an order of magnitude more devices to computing resources, the new mobile Internet will mean the creation of something like a "sensing" fabric. Cameras will create "eyes," microphones "ears to hear," and speakers "mouths to speak." Kinesthetic capabilities will create new ways to interact with information overlaid on the "real" or physical world.

All those new devices also will create new possibilities for enriching "location" information. GPS is fine for fixing a location in terms of latitude and longitude. But what about altitude? What about locating devices, people or locations that are in high-rise buildings? Emergency services and first responders need that additional information.

But the possibilities for "sensing" networks grow exponentially once communications, altitude, attitude and other three-dimensional information is available to any application. Lots of medical and recreational devices now can capture biomedical information in real time. Add real-time communications and many other possibilities will open up.

The point is simply that cloud computing as computational architecture will enable other changes, going well beyond the simple ability to send and receive information of any sort. The shift to distributed computing will, with mobile sensors, devices and people, lead to a vastly different ability to monitor the environment, and to process, annotate or contextualize events and objects in the real world with granularity.

That is not to understate the challenges and opportunities for a wide range of companies in the ecosystem, caused directly by a shift of core competencies. By definition, a change of computing eras has always been accompanied by a completely new list of industry leaders.

Keenly aware of that historic precedent, none of today’s computing giants will take anything for granted as the new era begins to take hold. At the same time, it is hard not to predict that key stakeholders of just about every sort might find themselves severely disrupted by the shift.

So far, whole industries ranging from media and music to telecom, advertising and retailing have found themselves struggling to adjust to a world with lower barriers to entry and radically different ways of creating and delivering products and services people want.

As the shift to the next computing paradigm occurs, many more human activities and business models will find themselves subject to attack and change.

Within the global communications business, it should be noted that the incremental growth of just about everything “mobile” will hit an inflection point. Whether that happened in 2009, will happen in 2010 or takes just a bit longer is not the point.

To talk about a world where a trillion devices are connected, in real time, to the Internet, to servers, software and applications, is to talk about a world where mobility IS communications. Mobility will not be merely an important segment of the business, it will be THE business at the end user level.

That is not to say the core backbone networks, data centers and other long-haul and even access networks are unimportant; to the contrary they will be the fundamental underpinning of the “always on, always connected” ecosystem of applications and business activity which will depend on those assets.

Without denigrating in any way the “pipes,” dumb or otherwise, that will be the physical underpinning of all the applications, there is only so much value anybody can wring out of plumbing. Most of the economic value is going to reside elsewhere.

That said, there already are numerous ways to look at cloud computing infrastructure, as it is used to build businesses that create added value.

Almost by definition, cloud computing enables consumption of software and applications that use remote computing facilities. We sometimes call this “software as a service” and the trend is an early precursor of what happens in the shift from PC-based to mobile and cloud-based computing.

Such uses of cloud computing will have intermediate effects on end user experiences. Lots of everyday computing or application experiences will shift away from local computing or storage, and towards on-the-fly rendering.

The shift to utility computing—enterprise use of cloud computing—will shift data centers from “owned and operated” facilities to outsourced services. But that likely will have less impact than the shift to SaaS-based applications.

The former is an “industrial” shift; the latter is more an “end user” shift. And all cloud computing effects will have most impact when they directly touch end user experiences.

Utility computing contributes to many end user experiences, but much utility computing is “behind the scenes.” Hosted applications are, and increasingly will be, everyday experiences for most human beings.

Web services are the area where end user impact will be noticed most strikingly, and where the most-profound transformations will occur, as Web services—mostly mobile—will touch end users with services and features that cannot be provided any other way.

Cloud computing is important, to be sure. But we will miss the bigger picture by focusing too narrowly on what it means for data centers, utility computing services, transport and access providers. Even the huge trend toward mobility is a sub-plot.

Cloud computing will enable an era of ubiquitous computing, with social and economic consequences we cannot begin to imagine. It is a huge business change for all of us in communications. But it is just a finger pointing at the moon; not the moon itself.

Google to Launch App Store for "Google Apps"

Google is preparing to launch an online store in which it will sell third-party business software to Google Apps customers, the Wall Street Journal reports.

The Wall Street Journal says that Google's store could arrive as early as March with the works of third-party developers available as enhancements to Google's office productivity software suite. It appears the store would allow Gmail and Google Docs users to purchase add-ons for niche features too specialized for the mainstream Google Apps product.

The Google Solutions Marketplace contains lists and reviews of third-party software for Google Apps and Enterprise Search, but it does not let you buy the applications directly from Google. That might be what is about to change.

Developers would have to share revenue with Google from sales of their software through the store, and it would be reasonable to assume revenue splits similar to those used by mobile application stores run by Google, Apple, and several other companies.

Typically, the developer gets 70 percent of the revenue.

As iTunes was the "secret sauce" that helped propel the iPod to prominence, and as the App Store has been the surprise attraction for the iPhone, perhaps app stores might provide similar value for service and device providers.

99% of BitTorrent Content Illegal?

A new survey suggests that about 99 percent of available BitTorrent content violates copyright laws, says Sauhard Sahi, a Princeton University student who conducted the analysis.

Some question the methodology, pointing out that the study only looks at content that is available, not content transferred. That might not be such a big distinction, though. Copyright holders are growing more insistent that Internet service providers actively block delivery or sending of such illegal material.

That, in turn, raises lots of issues. BitTorrent can be used in legal ways, so blocking all torrents clearly violates Federal Communications Commission guidelines about use of legal applications on the Internet. That said, the fact that the overwhelming majority of BitTorrent files consist of copyrighted material raises huge potential issues for ISPs that might be asked to act as policemen.

The study does not claim to make judgments about how much copyrighted content actually is downloaded. But it stands to reason that if such an overwhelming percentage of available material is copyrighted, most uploads and downloads will be of infringing content.

The study classified a file as likely non-infringing if it appeared to be in the public domain, freely available through legitimate channels, or user-generated content.

By this definition, all of the 476 movies or TV shows in the sample were found to be likely infringing.

The study also found seven of the 148 files in the games and software category to be likely non-infringing—including two Linux distributions, free plug-in packs for games, as well as free and beta software.

In the pornography category, one of the 145 files claimed to be an amateur video, and the study gave it the benefit of the doubt as likely non-infringing.

All of the 98 music torrents were likely infringing. Two of the fifteen files in the books/guides category seemed to be likely non-infringing.

"Overall, we classified ten of the 1021 files, or approximately one percent, as likely non-infringing," Sahi says.

"This result should be interpreted with caution, as we may have missed some non-infringing files, and our sample is of files available, not files actually downloaded," Sahi says. "Still, the result suggests strongly that copyright infringement is widespread among BitTorrent users."

Monday, February 1, 2010

Private Line Market Starts Decline

After years of steady growth, the $34 billion private line services market is entering a period of declining revenue, says Insight Research. It could hardly be otherwise. Just as IP-based services are displacing TDM-based voice, so IP-based and Ethernet-based bandwidth services are displacing SONET bandwidth services, frame relay and ATM services.

U.S. enterprises and consumers are expected to spend more than $27 billion over the next five years on Ethernet services provided by carriers, Insight Research predicts. With metro-area and wide-area Ethernet services now available from virtually all major data service providers, the market is expected to grow at a compounded rate of over 25 percent, increasing from $2.4 billion in 2009 to nearly $7.8 billion by 2014.
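That forecast implies a compound annual growth rate that is easy to verify; a minimal sketch:

```python
# Implied compound annual growth rate (CAGR) for the Ethernet forecast
# cited above: $2.4 billion (2009) growing to $7.8 billion (2014).
start, end, years = 2.4, 7.8, 5

cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # ~26.6%, consistent with "over 25 percent"
```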

The decline in revenue will continue from 2009 to 2012. But Insight Research also believes private line revenues will tick up a bit after 2012, presumably as additional applications drive demand for more bandwidth. Why the growth would not come in the form of alternative IP bandwidth is not precisely clear, though.

Insight believes additional demand for wireless backhaul and video will lead to more buying of SONET products. Some of us would disagree, but we shall see.

"The transition away from frame and ATM will put a break on overall private line industry revenue growth for a couple of years," says Robert Rosenberg, company president . "However, private line demand remains strong for wireless backhaul, local bandwidth for caching IPTV video services, and for facilitating VoIP."

Google Nexus One for AT&T?

A device that's almost certainly an AT&T-compatible version of the Google Nexus One has been approved by the Federal Communications Commission. The version now sold by Google works on all T-Mobile USA 3G spectrum, but not on all AT&T 3G bands.

Versions running on Verizon's CDMA air interface and also for Vodafone are expected at some point.

Both the Nexus One and the newly-approved phone are being made by HTC. And while the name of the product in question isn't given, its model number is: 99110. The model number for the current version of Google's smartphone is 99100. These are so close it seems very likely they are from the same series.

On the Use and Misuse of Principles, Theorems and Concepts

When financial commentators compile lists of "potential black swans," they misunderstand the concept. As explained by Nassim Taleb ...