Monday, June 20, 2016

Is There a Spectrum Crunch, or is There Spectrum Abundance? Yes and Yes

According to Cisco, mobile data traffic has grown 4,000-fold over the past 10 years and almost 400-million-fold over the past 15 years, while global mobile data traffic will increase nearly eight-fold between 2015 and 2020 alone.

So service providers routinely warn of a “spectrum crunch” that threatens to derail progress on the communications front.

On the other hand, many critics of the present spectrum allocation process say there actually is plenty of unused spectrum that could be put to work, if we were smarter and used new tools to open up fallow spectrum that already has been licensed.

In that view, hoarding of spectrum that prevents its use--intentional or unintentional--is as big a problem as the total amount of usable communications spectrum.

The “unity of opposites” here is that both arguments are correct: there is a lack of available spectrum, as demand for communications soars, and “more” is needed.

On the other hand, there is plenty of available spectrum within the already-allocated 30 MHz to 3 GHz bands, if we could efficiently share its use, while protecting the existing license holders.

One way or the other, the simple answer for the capacity crunch is “more spectrum.”

Since most spectrum useful for communications--from 30 MHz up to about 3,000 MHz--already is allocated, spectrum sharing is the answer to gaining use of huge amounts of spectrum already licensed to existing users.

But spectrum sharing also is the key to efficient use of new spectrum at higher frequencies, from 3 GHz up through the millimeter wave bands that extend to 300 GHz. The notion is “use it or share it,” rather than letting a licensee squat on resources that nobody else can use, even when the licensee is making no use, or only light use, of the resource.

In one important sense, spectrum sharing introduces an efficient market mechanism for spectrum use, encouraging licensees not to hoard valuable assets but to put them to work.

The traditional licensing of spectrum on an “exclusive right to use” basis is inefficient. In fact, some policy advocates claim that as much as 95 percent of licensed spectrum goes unused. Those claims appear to be based on a 2005 National Science Foundation study.

Other studies of U.S. spectrum use between 30 MHz and 3 GHz did not always find that degree of fallow bandwidth, but the point is that much spectrum is not used much, most of the time.

That, plus the expected incremental demand for mobile and wireless communications, is driving innovations in network architecture (small cells), radio technology, mobile traffic offload, new spectrum (millimeter waves) and spectrum sharing.

All of that suggests the importance of two broad strategies: making better use of spectrum that is available but lightly used, and opening up non-traditional bands that historically have been very hard to use for commercial purposes.

And advances in computing technology driven by Moore’s Law now are crucial to that effort. Simply put, cheap and powerful computing makes possible lower-cost sharing of spectrum, as well as commercial use of the higher frequencies (above 3 GHz, up through the millimeter wave bands below 300 GHz) that have been too expensive and too difficult to use in the past.

In the former case, sophisticated and cheap computing means we can allocate access in real time, in ways that literally were not possible in the past.

In the latter case, those same techniques let us intentionally design access systems that allow robust sharing of resources among a number of potential users.

But just as important is the application of processing power to improve the usefulness of millimeter wave frequencies, which are distance-limited (signals do not travel far) and propagation-limited (signals cannot pass through solid objects).

At the same time, the use of small cell networks helps overcome both distance and line-of-sight limitations. Better radio techniques also allow us to “bend” signals around solid objects and recover signals that have become weak or scattered.

“More spectrum” arguably is the single most important issue in communications. But spectrum sharing arguably is the most important tool for securing that needed spectrum.

As you would guess, incumbent service providers, including mobile and satellite firms, oppose sharing, while others, especially app providers, support spectrum sharing. But it seems inevitable that spectrum sharing, following the model of the 3.5 GHz Citizens Broadband Radio Service band, will be proposed.

The proposed CBRS service would use a three-tier access-rights system: incumbent users retain protected, priority access; commercial users can obtain licensed secondary (priority access) rights where the incumbents are not using the spectrum; and other devices and services get general authorized access on a best-effort basis (on the model of Wi-Fi).
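To make the three-tier idea concrete, here is a minimal sketch, in Python, of how a spectrum access system might grant channel requests by tier. The tier names follow the CBRS proposal, but the data structures and grant logic are hypothetical, for illustration only, not the FCC's actual rules.

```python
# Minimal sketch of three-tier spectrum access, loosely modeled on the
# proposed CBRS framework. Tier names follow the proposal; data structures
# and grant logic are illustrative only.
from dataclasses import dataclass

# Lower rank means higher priority.
TIERS = {"incumbent": 0, "priority_access": 1, "general_authorized": 2}

@dataclass
class Grant:
    user: str
    tier: str
    channel: int

def request_channel(grants, user, tier, channels=range(10)):
    """Grant the first channel not held by an equal- or higher-priority user,
    preempting any lower-priority users already on that channel."""
    rank = TIERS[tier]
    for ch in channels:
        occupants = [g for g in grants if g.channel == ch]
        if all(TIERS[g.tier] > rank for g in occupants):
            # Preempt lower-priority occupants (e.g., general authorized users
            # must vacate when an incumbent or priority licensee appears).
            grants[:] = [g for g in grants if not (g.channel == ch and TIERS[g.tier] > rank)]
            grant = Grant(user, tier, ch)
            grants.append(grant)
            return grant
    return None  # no channel available at this tier

# Example: an incumbent reclaims channel 0, preempting a general authorized user.
grants = []
request_channel(grants, "wisp-1", "general_authorized")
request_channel(grants, "navy-radar", "incumbent")
print(grants)  # only the incumbent grant remains on channel 0
```

The point of the sketch is the ordering rule: lower tiers only ever get spectrum the higher tiers are not using at that moment, which is exactly what makes sharing of already-licensed bands politically and technically feasible.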



14,000 MHz More Unlicensed Spectrum? Yes, Says FCC

The  Federal Communications Commission expects to vote July 14, 2016 on a proposal to free up frequencies above 24 GHz for 5G applications. The Notice of Proposed Rulemaking is prodigious, involving new “flexible use service rules” (spectrum sharing in licensed bands) in the 28 GHz, 37 GHz, 39 GHz, and 64 GHz to 71 GHz frequency ranges.

Part of the plan apparently will call for a major new unlicensed allocation: “Our plan proposes making a massive 14,000 more megahertz of unlicensed band” available, says Federal Communications Commission Chairman Tom Wheeler.

Compare that to the 100 MHz allocated in the United States for Wi-Fi at 2.4 GHz and the 150 MHz allocated for Wi-Fi at 5.8 GHz: the FCC is proposing to release roughly 56 times as much unlicensed spectrum as the approximately 250 MHz presently available to support Wi-Fi in those bands.
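A quick back-of-the-envelope check, using only the figures cited above, shows where that ratio comes from (a minimal sketch; the existing Wi-Fi allocations are the approximations given in the text):

```python
# Rough comparison of the proposed new unlicensed spectrum with existing
# Wi-Fi allocations, using the approximate figures cited in the text.
proposed_unlicensed_mhz = 14_000
existing_wifi_mhz = 100 + 150  # ~2.4 GHz band + ~5.8 GHz band, as cited above

print(proposed_unlicensed_mhz / existing_wifi_mhz)  # 56.0 -- roughly 56x existing Wi-Fi spectrum
```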

2.2 Billion People Living in Cities Globally Do Not Have Internet Access, Report Claims

Some 57 percent of the world’s urban population remains unconnected, with neither fixed nor mobile broadband. That represents more than 2.2 billion people living in cities across the world, according to research conducted by Maravedis on behalf of the Wireless Broadband Alliance.


Just over two thirds (68 percent) of people in Asia Pacific have no broadband connection, while 55 percent of people in Latin America are without broadband, Maravedis researchers argue.


Almost a quarter (23 percent) of people in North America have no broadband connection, despite the region having the world’s highest average monthly income.


Europe has the lowest percentage of urban unconnected, at 17 percent, while the Middle East and Africa have the highest proportion of urban unconnected citizens, at 82 percent.

London is the most connected major global city (only 8 percent unconnected), while Lagos is the least connected (88.3 percent unconnected).


Thursday, June 16, 2016

U.S. Business Markets Spending is Slowly Declining

U.S. business customers are spending less on communication services, says CMR Market Research. From 2010 to 2014, total business services revenues for all suppliers decreased by $6.0 billion, from $110.0 billion to $104.0 billion, a contraction of roughly 1.4 percent per year.

AT&T, with $34.3 billion in annual revenue and 32 percent market share, and Verizon, with $22.8 billion in annual revenue and 22 percent market share, remain the market leaders, while dozens of companies split the remaining 45 percent or so of the market.

Between 2010 and 2014, for example, AT&T and Verizon lost about six percentage points of market share, while cable TV companies gained about eight points of market share. In other words, cable market share has grown, while, in aggregate, other providers have lost share.

Mobility eventually will become a growth area for cable TV providers as well, since the U.S. fixed network business services market is contracting, while the mobile segment is growing.

From 2012 to 2017, the total commercial services wireline market will contract from $95.5 billion to $92.3 billion at a CAGR of negative 0.7 percent, according to Insight Research.

Mobile commercial services, on the other hand, will grow at approximately eight percent during this period.

From 2012 to 2017, cable TV operator supplied commercial services revenue will grow from $7.6 billion to $12.3 billion at a CAGR of 10 percent, Insight Research predicts.
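Those compound annual growth rates can be checked directly from the endpoint revenues. A minimal sketch, using the Insight Research figures cited above:

```python
# Compound annual growth rate (CAGR) from the endpoint revenues cited above.
def cagr(start, end, years):
    """Compound annual growth rate between a starting and ending value."""
    return (end / start) ** (1 / years) - 1

# Total commercial wireline services, 2012 -> 2017 ($ billions).
print(f"Wireline: {cagr(95.5, 92.3, 5):+.1%}")  # about -0.7 percent
# Cable-operator commercial services, 2012 -> 2017 ($ billions).
print(f"Cable:    {cagr(7.6, 12.3, 5):+.1%}")   # about +10 percent
```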

In the past, as much as 85 percent of the commercial customer base of most major cable TV companies has consisted of small companies with fewer than 20 employees, according to Cisco.

And for the most part, MSOs’ commercial customers are heavily concentrated among businesses with fewer than five employees. With SMBs typically defined as businesses with one to 999 employees, that leaves a significant portion of the market unaddressed by MSOs.



No Business Model for 5G?

It always is difficult to fully anticipate the business value provided by each successive generation of mobile networks. There always is a stated business case, of course. From the first generation to the second, the key advantage was the transition from analog to digital, with the benefits that shift normally brings.

The shift from 2G to 3G was supposed to be “new applications.” That eventually happened, but not right away. First mobile email and then mobile Internet access were new apps of note, though the use of mobile hotspots also was an important development.

The shift from 3G to 4G generally was said to be “more bandwidth” supporting new applications.

Video apps generally have been the most notable new apps compared to 3G, although user experience when using the Internet also is far better with 4G. And though it often goes unnoticed, 4G speeds have allowed many users to substitute mobile access for fixed Internet access.

More U.S. households now seem to be abandoning even fixed Internet access in favor of mobile access, just as it now is common for households to rely on mobile voice instead of fixed network voice (more than 46 percent of U.S. households now are “mobile only” for voice), or on over-the-top video entertainment in place of traditional subscription services.

In fact, because of mobile use, fixed network Internet access rates actually are dropping in the United States, having reached an apparent peak in 2011.

Still, it is reasonable to argue that there is not yet a clear business case for 5G. But some might argue that has been the case for at least two successive generations of mobile networks. Both 3G and 4G were supposed to lead to development of many new apps.

That has happened, but mostly because of the contributions of third party app developers. On the other hand, both 3G and 4G have offered efficiency gains, something of clear importance for a mature business featuring high competition and therefore margin pressure, plus declining revenue from the legacy apps.

“The business reality is that there is no new money,” argues William Webb, CEO of Weightless SIG and a communications consultant. “So either 5G will need to be delivered within the confines of current operator revenue or it will need to deliver new services that consumers are prepared to pay more for.”

To some extent, both requirements (“no new money” and “new services needed to boost revenue”) likely will be part of the eventual business case.

One might argue, though, that the percentage of household income devoted to mobile communications has grown over the last decade. So there is some amount of “new money” being devoted to mobile services, even if that comes with less spending on other services, such as fixed network services.

To be sure, U.S. consumer spending on communications is a relatively small portion of household spending, too small to be broken out by the Bureau of Labor Statistics, for example.

In most U.S. households, and certainly for households with more than the mean number of household members (2.5), spending on mobile services almost certainly outpaces spending on any other communications service, including high speed access, entertainment video and fixed network voice subscriptions.

From 2007 to 2014, expenditures for mobile phone services increased across all household sizes, by 38.7 percent for one-person consumer units and by as much as 70.9 percent for consumer units of five or more persons, according to the U.S. Bureau of Labor Statistics.

Also, mobile now represents 73 percent to 80 percent of total household spending on communications.

One-person consumer units have the lowest cellular share of telephone services spending of any household size group, but that share increased from 49 percent in 2007 to 64.3 percent in 2014.

In contrast, fixed network voice accounted in 2014 for just about 27 percent of household telephone services spending. The perhaps-obvious question is how much is spent on high speed Internet access, something hard to glean from Bureau of Labor Statistics data.

More significantly, BLS data shows that U.S. household spending on communications is growing, and has been growing since 1990.


In the United Kingdom, households spend between three percent and four percent of income on communications, according to Ofcom.

In households with five or more people, mobile accounts for about 80 percent of spending on “telecommunications.”

The other issue is whether new services will develop that generate more revenue. That is an open question at the moment. But many argue that Internet of Things apps will drive additional service revenue, while customers are showing willingness to spend more money for faster Internet connections as well.

Of course, there are regional variations. Average revenue per user, or per account, has shrunk in Western Europe, but climbed in the U.S. market.

And there are other complications. Smartphones tend to have much higher ARPU than tablet connections. Internet of Things connections are expected to have lower ARPU than tablets. So revenue per account and revenue per connection will diverge.

In my own household, ARPU (per device) is relatively stable, but the revenue per account has grown, as more devices have been added, and bigger usage allowances have been purchased.
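A minimal numeric sketch of that divergence, using hypothetical per-device revenue figures (the dollar amounts are illustrative, not drawn from any carrier's actual pricing):

```python
# Illustration of how revenue per account can grow even as revenue per
# connection (ARPU per device) falls, as lower-revenue devices are added.
# All dollar figures are hypothetical.
account = {"smartphone 1": 50.0, "smartphone 2": 50.0}
print(sum(account.values()), sum(account.values()) / len(account))  # 100.0 per account, 50.0 per connection

# Add a tablet, a smartwatch and a connected-car sensor at lower price points.
account.update({"tablet": 10.0, "smartwatch": 5.0, "car sensor": 2.0})
print(sum(account.values()), sum(account.values()) / len(account))  # 117.0 per account, 23.4 per connection
```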

The point is that there is evidence of growing household spending on mobile services, and at least a pathway to new spending on new services. There will be a business model for 5G. It might take some time to develop in a clear way, though.

The other angle is that mobile networks get replaced about every 10 years, so the revenue from the older generation of networks is captured, and built upon, by the new network.

In other words, revenue shifts from older networks to newer networks, which also tend to be more efficient, and therefore less costly. That tends to be the case even under new circumstances, where legacy revenue sources are replaced by new sources.

Mobile voice and messaging revenue, for example, is dropping in many markets, but mobile Internet access revenue is growing. The issue is the magnitude of blended revenue.

Even without knowing the particulars, some of us would argue that, just as a shift to gigabit access on the fixed network will lift ARPU, so too will gigabit access on the mobile network.

What is Telco Equivalent of Automaker Investments in Ridesharing?

How to deal with disruption of one’s legacy business is never as easy as it seems. Facing over the top voice competition, some telcos tried creating their own branded OTT offers, without notable success. Others tried to do so with messaging, again without notable success.


At least so far, some of the most notable “new service” initiatives have involved taking market share from other providers (telcos getting into entertainment video; cable TV companies getting into voice).


The notable exceptions have come in Internet access and mobile services. Both provide the best examples of a new service that achieved ubiquity.


Cable TV firms, on the other hand, have been successful at creating or buying content assets, which provides something of a model for an investment approach telcos could take.


So far, access providers have not taken one possible approach, namely taking minority stakes in leading OTT firms.


Auto manufacturers are taking a slightly different approach, investing in the ridesharing firms the automakers believe could disrupt the “car ownership” market. That approach is akin to Microsoft buying Skype.


At least so far, though, the auto investments represent an approach telcos and other access providers have not taken: the equivalent of AT&T and Verizon becoming minority investors in Google, Facebook and Netflix.


We might yet see something on that model in the Internet of Things arena, though. The key will be a willingness to accept minority ownership, with the corresponding lack of full control.


What we generally have not yet seen is a willingness on the part of some access providers to envision a major change in business model and ecosystem role.


Auto manufacturers actually are investing in ridesharing services because they fear the auto sales business might shrink. So they are hedging by investing in transportation services, rather than vehicle sales.


Driverless cars will be the determining factor in whether car ownership declines in favor of ride-hailing services, many believe, in large part because removing the need for human drivers can sharply lower costs.


The cost of owning and driving a car an average of 15,000 miles per year in the US is approximately $0.57 per mile driven, according to AAA.


UberX costs $2.15 per mile in New York City and UberPOOL costs $1.61 per mile.


However, if a ride-hailing company could cut the driver out of the equation with driverless vehicles, it could undercut the cost of owning a car.
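To put rough numbers on that argument, here is a minimal sketch using the per-mile figures cited above; the assumed share of the ride-hailing price attributable to the driver is purely illustrative:

```python
# Rough per-mile cost comparison using the figures cited above, plus a
# hypothetical driverless scenario. The assumed driver share of the
# ride-hailing price is illustrative only.
own_per_mile = 0.57    # AAA estimate for owning and driving a car
uberx_per_mile = 2.15  # cited UberX rate in New York City
annual_miles = 15_000

print(f"Owning: ${own_per_mile * annual_miles:,.0f} per year")    # about $8,550
print(f"UberX:  ${uberx_per_mile * annual_miles:,.0f} per year")  # about $32,250

# Hypothetical: if removing the driver cut, say, 75 percent of the price,
# the per-mile rate would drop below the cost of ownership.
driver_share = 0.75  # assumption for illustration
print(f"Driverless (assumed): ${uberx_per_mile * (1 - driver_share):.2f} per mile")  # about $0.54
```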


The parallel shift, for an access provider, would be a shift towards ownership of content and application assets, rather than sales of access services.

As unlikely as that seems, it represents the hedge automakers are making with their ridesharing investments.

Automakers are redefining themselves, in part, from being in the "making cars" business to being in the "transportation" business. The issue is whether service providers can recraft at least some portion of their business from "access" to "apps."

"Wearable" Value Proposition Seems Unclear to Most

A new study of wearable adoption suggests the value proposition is a bit unclear. “Of all the users of wearables surveyed, around one in 10 said they no longer used their wearable devices, with one third of these owners abandoning them within a couple of weeks of purchase,” said Ericsson.

So, at least so far, it does not appear that wearables--especially watches--are the "next big thing."

Ericsson says abandonment rates are declining, but user expectations also are rising. A common cause of dissatisfaction is that users feel tethered to their smartphones.

Among those who have abandoned wearables, 14 percent did so because the devices lacked standalone connectivity.


In fact, 83 percent of all smartphone users surveyed expect wearables to have some form of standalone connectivity.

When asked what form they would expect this connectivity to take in the future, around 40 percent of existing wearables owners and 46 percent of non-wearables users preferred Wi-Fi and cellular connectivity, with existing users expressing a two times higher preference for built-in mobile connectivity compared to non-users of wearables.

On the other hand, the cost of an additional mobile data connection is a barrier to wearables use. About 33 percent of non-users indicated that the cost of keeping digital devices connected is a key reason why they have not invested in wearable technology.

The survey also suggests consumers believe other use cases, such as personal safety, could drive adoption by 2020. Demand for a wide range of other potential use cases seems unclear in the medium term.


As was the case for the smartphone, which functionally displaced use of other consumer devices (cameras, clocks, GPS devices), many consumers and industry executives believe wearables could replace a wide range of other existing devices.

Source: Ericsson

Will Generative AI Follow Development Path of the Internet?

In many ways, the development of the internet provides a model for understanding how artificial intelligence will develop and create value. ...