Monday, March 31, 2014
FCC to Auction 65 MHz of Shared Spectrum for 4G
The Federal Communications Commission has moved to free up about 65 MHz of spectrum, on a shared basis, for use by Long Term Evolution 4G networks. The key element might be the fact that the spectrum will be put up for auction under “flexible use” rules for the AWS-3 band, which includes the 1695-1710 MHz, 1755-1780 MHz, and 2155-2180 MHz bands.
The novelty here is that the licenses will not necessarily be sold on an “exclusive basis.” The new band, called Advanced Wireless Services-3 (AWS-3), would be the first band shared between commercial networks and government systems.
That way of allocating spectrum is quite new. In the past, all spectrum has been awarded either on an exclusive basis or, in the case of Wi-Fi, on an open basis with no interference protection.
The new mode of sharing will likely allow licensees and others to share a given block of spectrum, with interference protections. That's new.
Gary Kim has been a digital infra analyst and journalist for more than 30 years, covering the business impact of technology, pre- and post-internet. He sees a similar evolution coming with AI. General-purpose technologies do not come along very often, but when they do, they change life, economies and industries.
100 MHz of New Wi-Fi Spectrum Authorized at 5GHz
The Federal Communications Commission has moved to make 100 MHz of spectrum in the 5-GHz (5.150-5.250 GHz) band available for Wi-Fi or other uses. The move will increase the total amount of U.S. Wi-Fi spectrum by about 15 percent, some reckon.
The rules adopted today remove the current restriction on indoor-only use and increase the permissible power.
That will be useful for creating Wi-Fi hot spots at venues such as airports and convention centers.
The move was expected.
Saturday, March 29, 2014
Content Fragmentation, Not Technology, is Barrier to Widespread Video Streaming
Content fragmentation caused by content rights agreements and release windows is among the non-technical reasons it is taking so long for streaming services to replicate full linear video content lineups.
Technology, as such, no longer is the issue. Instead, content rights are the key barrier. It isn’t so much theatrical release, airline or hotel pay per view, or release to retail sale that is the issue.
People sort of understand there is a rolling series of release windows for new movie content, and the process is relatively linear and straightforward, up to the point that the “premium” networks get their first access.
Viewers understand that movies debut in theaters, then move at some point to limited hotel and airline pay per view before their general availability on Blu-ray, DVD and digital services.
But then there is what some might call a hiccup. About a year after theatrical release, movies can be shown on networks such as HBO, Starz and Epix.
But contracts specify that while a movie is licensed to run on such a channel, it cannot be viewed on any other channel, or on a rival streaming service.
In total, it takes five to seven years for all restrictions to expire, after which any movie can be shown by any streaming service that wishes to buy the rights. And HBO alone has rights to about half of all the movies released by major studios in the United States until beyond 2020.
So no streaming service can offer its subscribers “all movies.” That fragmentation will limit streaming growth for quite some time, forcing consumers to buy multiple services to get “most” video content after theatrical release.
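The windowing sequence described above can be sketched as a simple availability check. The window boundaries and names below are illustrative assumptions, not actual contract terms, which vary by studio and title.

```python
# Hypothetical release windows, expressed as months after theatrical
# release. Real licensing contracts differ by studio, title and market.
WINDOWS = {
    "theatrical": (0, 3),
    "hotel_airline_ppv": (3, 6),
    "dvd_bluray_digital": (6, 12),
    "premium_network_exclusive": (12, 60),  # e.g., HBO, Starz, Epix
    "any_streaming_service": (60, None),    # all restrictions expired
}

def available_windows(months_since_release):
    """Return the distribution windows a title currently falls in."""
    current = []
    for name, (start, end) in WINDOWS.items():
        if months_since_release >= start and (end is None or months_since_release < end):
            current.append(name)
    return current

print(available_windows(18))  # ['premium_network_exclusive']
print(available_windows(72))  # ['any_streaming_service']
```

The exclusivity problem the post describes is visible in the middle window: during the premium-network period, no rival streaming service can carry the title at all.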
In addition to those issues, streaming services themselves work to get exclusivity, as well. In other words, a viewer’s desire for “one service that has everything” conflicts with provider effort to gain marketing advantage by offering what no other provider can offer.
Fragmentation is inevitable, under such circumstances. It might not be elegant, but some consumers will simply buy multiple subscriptions, to get access to more content.
Will Facebook Become an ISP?
Precisely what Facebook plans to do with drones is hard to tell, as it once was hard to tell what Google might do in the Internet access area.
But there were more hints in Google’s case, as Google had invested in a number of Internet service provider initiatives, ranging from metropolitan Wi-Fi, airport Wi-Fi and promises to bid on 4G spectrum (to put a floor under the bidding prices) to actual investments in spectrum (Clearwire).
Up to this point, Facebook has introduced “zero rating” programs in a couple of countries, allowing people to use Facebook without consuming any of their mobile data allotment.
“In just a few months, we helped double the number of people using mobile data on Globe’s network and grew their subscribers by 25 percent,” said Mark Zuckerberg, Facebook CEO. “In Paraguay, by working with TIGO we were able to grow the number of people using the internet by 50 percent over the course of the partnership and increase daily data usage by more than 50 percent.”
Facebook promises other such partnerships will be launched. But not only partnerships, perhaps. “But partnerships are only part of the solution,” Zuckerberg says. To be sure, that is the sort of statement easy to take out of context.
In context, Zuckerberg only is saying Facebook will work to develop new Internet access methods. “To connect everyone in the world, we also need to invent new technologies that can solve some of the physical barriers to connectivity,” said Zuckerberg.
That only suggests Facebook will look to create new forms of access, not necessarily that Facebook will become the access provider.
Already, Facebook says it is working on mesh networks for cities, drones for medium-density areas and satellites for low-density areas. Since Facebook acknowledges that price mostly is the issue in 80 percent to 90 percent of cases, including all urban areas, one might ask why Facebook is working on urban area coverage at all.
In medium-density areas, where drones might be used, Facebook could in principle simply license or promote such technology to other ISPs. But what if other ISPs refuse? What if other ISPs move too slowly?
As for satellite access, Facebook notes that it is expensive to launch and use satellites, but getting cheaper. Facebook says it is looking at both low earth orbit and geostationary approaches, with free space optics.
“One major advantage of aerial connectivity, however, is that deployment to people’s homes is relatively simple,” says Zuckerberg. “Relatively cheap devices already exist that can receive signals from the sky and broadcast Wi-Fi to mobile phones.”
Facebook might at the moment prefer only to push the Internet access process faster by commercializing new access networks. But the act of creation can change the realm of possibility. What might not have been viewed as desirable, initially, might look quite reasonable, in the end.
And, in any case, what actor would want to broadcast its intentions on such a matter? What advantage would Google have gained had it said “we are going to become Internet service providers”?
Sure, it always is possible Facebook will create some new access platforms, and then simply encourage others to use them. But that seems only one of a few likely scenarios. And one of those scenarios includes Facebook becoming a supplier of end user Internet access.
Friday, March 28, 2014
Perhaps Half of High-Speed Access Consumers Would Pay for "Assured" Speed
A poll of U.K. high speed access consumers contains what is probably good news and bad news for Internet service providers who believe quality-assured speeds would be attractive to their customers, compared to today's more uncertain offers, where, for a variety of reasons, all an ISP can say is that speeds "up to X" are possible.
It all depends on how many other users are on the network at once, and what they are doing.
The new poll by Think Broadband suggests that perhaps half of consumers might be interested in a speed guarantee, and would pay something extra for such guarantees.
The perhaps not so good news is that those who said they would be willing to pay also indicated they would spend about $5 to $8 a month for the feature. To be sure, a price premium of that sort would be helpful for ISPs, even if it were to be a feature purchased only by 20 percent of consumers.
The other problem, of course, is that even when an ISP can control contention on its own access links, it cannot do so for the rest of the ecosystem. That means such an offer would have some caveats and limitations that might make the offer less appealing.
The other issue is whether an ISP can even explain, to most consumers, why an offer is conditional, and what the speed guarantee actually provides.
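A rough sizing of the opportunity, combining the poll's take rate and price band with an assumed subscriber base:

```python
# Incremental annual revenue from a paid speed-guarantee feature.
# The 20 percent take rate and $5 to $8 monthly price band come from
# the poll discussion above; the subscriber base is an assumption.
subscribers = 1_000_000
take_rate = 0.20
price_low, price_high = 5, 8  # dollars per month

annual_low = subscribers * take_rate * price_low * 12
annual_high = subscribers * take_rate * price_high * 12
print(f"${annual_low / 1e6:.0f}M to ${annual_high / 1e6:.1f}M per year")
# $12M to $19.2M per year, per million subscribers
```

Even at the low end of the price band, that is meaningful incremental revenue, which is why the caveats around what the guarantee actually covers matter so much.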
Thursday, March 27, 2014
Amazon to Launch Ad-Supported Video Streaming Service?
It appears Amazon is considering launching an ad-supported streaming video service, a move that would complement Amazon Prime and provide another source of content for the video streaming dongle Amazon is expected to launch in April 2014.
That device will compete with the Apple TV, Google Chromecast and Roku boxes.
Wednesday, March 26, 2014
Consumer Dissatisfaction With Fixed Network Services Creates Opportunity for Attackers
U.S. consumers appear to have wide differences in “satisfaction” with triple-play services they buy from some service providers, compared to others, according to Consumer Reports. Polling 81,848 customers of fixed network services, Consumer Reports found Verizon's FiOS was near the top of the rankings in every category, while AT&T Inc.'s U-verse was in the middle.
Comcast's TV service ranked 15th out of 17 providers, while Time Warner Cable's was 16th.
Comcast and Time Warner Cable also were in the bottom half of phone and Internet service providers and among the 14 firms selling triple-play services, according to Bloomberg.
Though Verizon executives might be pleased, the industry as a whole ranks at or quite close to the bottom in consumer satisfaction among all industries. Of 43 industries tracked by the American Consumer Satisfaction Index, for example, Internet service providers rank 43rd.
Linear video service providers rank 41st. Even mobile service providers ranked no better than 39th out of 43.
The best-scoring industries were TVs and credit unions, both scoring 85, while the ISP industry scored 65.
To be sure, “customer satisfaction” is not a foolproof proxy for “loyalty.” In many cases, “unhappy” customers will not change suppliers. In other cases, even “happy” consumers will churn.
You can probably imagine instances where even unhappy customers will not change suppliers. They might believe all the suppliers are roughly the same. On the other hand, you can probably imagine scenarios where even happy customers will change providers, as when one provider offers the “same quality at a lower price.”
But there are signs executives should be concerned. For starters, as Verizon’s performance shows, consumers are capable of perceiving quality differences that result in higher satisfaction.
To the extent that higher satisfaction is related to lower churn, satisfaction will matter.
The other issue is that “value” appears to be a growing source of pain for subscribers to linear video entertainment packages, and “prices” would seem to be part of the reason for the dissatisfaction.
One might surmise that other issues, such as slow speeds and congestion, as well as outages, account for the low satisfaction scores for ISPs.
But value related to price is likely a bigger issue for linear video subscription services.
If the average monthly cost of a triple-play bundle is $154, then the annual cost is $1,848, more than the average household spends on clothing, furniture, or electricity, according to Consumer Reports.
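The annualization behind that comparison is straightforward:

```python
# Annualizing the average triple-play bundle cost cited by Consumer Reports.
monthly_bundle_cost = 154               # dollars per month
annual_cost = monthly_bundle_cost * 12
print(annual_cost)  # 1848
```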
Given that “value” is a big issue, bundles that save money should help. They do, but even buyers of bundles seem “unimpressed with what they were getting for their money,” Consumer Reports says.
“Even WOW and Verizon FiOS, which got high marks for service satisfaction, rated middling or lower for value, and out of 14 providers, nine got the lowest possible value rating,” says Consumer Reports.
To be sure, some industries just have a harder time in the “customer satisfaction” area. Airlines and fixed line communications and video entertainment providers traditionally do not score high in consumer satisfaction surveys. The fact that both those industries are susceptible to service interruptions might explain the ratings.
To be sure, outages are certain to cause unhappiness. In the case of high speed access, slowdowns caused by congestion likewise are going to reduce satisfaction with the product.
If high speed access and video entertainment are the foundations for tomorrow’s fixed network revenue streams, such unhappiness is a danger. To be sure, most linear video service providers now operate much more consistently, with fewer outages, than in the past.
And one might argue that the more-reliable performance has led to higher satisfaction, something that seems to be true especially for satellite video providers and Verizon’s fiber to home service.
Perhaps oddly, ISPs scored lower than fixed line voice providers for satisfaction. One reason for that finding might well be that unhappy fixed network voice customers already have left, while remaining customers are using the service less and less.
Also, users of fixed line voice services might not be as aware of outages as they are with video services.
A television user might have the service in use five to seven hours a day, and certainly will know immediately if there is an outage.
In contrast, a fixed network voice user might experience many outages, but never know.
Still, the potential dangers for incumbent triple-play providers are obvious.
If consumers consistently are dissatisfied with linear video services and even high speed access, and if the fixed network business is built on those two services, then the danger of new competitors entering the market is high.
Almost nothing is more attractive to a would-be entrepreneur than a large market with high gross revenues, served by competitors their customers heartily dislike.
Google Fiber represents the realization of that threat in the high speed access business, while streaming video represents the danger to the linear video business.
Tuesday, March 25, 2014
How Revenues Can Grow Even in Midst of a Price War
A key caveat for all economic predictions is that something will happen, ceteris paribus (all other things being equal).
In real life, almost nothing is ever equal. In fact, the act of change itself changes the environment, leading to changes in behavior and outcomes.
Likewise, it is difficult to isolate the specific effect of one change, such as T-Mobile US launching a pricing and packaging assault, when multiple other changes also are occurring, such as fast-growing demand for mobile data services, the addition of new classes of connected devices and key changes in the retail packaging of multi-user plans.
And that is why there actually can be disagreement about whether any such mobile pricing war actually is occurring.
Some will look at the numbers and conclude there is no mobile price war under way in the United States, despite the many changes in retail packaging and pricing we have seen over the last year.
In fact, some note, mobile service provider revenues are growing, at least in the U.S. market, and that the amount of an average monthly bill also is rising.
Average monthly revenue per postpaid customer across the industry rose 2.2 percent to $61.15 in the fourth quarter, according to New Street Research. That is up more than $5 per user from the first quarter of 2010, when the same measure was $55.80.
But T-Mobile US fourth-quarter and full-year 2013 results might strongly suggest there is indeed a price war going on.
In the fourth quarter, T-Mobile US posted revenues of $6.83 billion, compared with revenues of $6.19 billion in the third quarter of 2012.
Adjusted earnings, though, fell from $1.36 billion to $1.24 billion. T-Mobile US average revenue per user also slipped.
Average revenue per user for T-Mobile US branded postpaid customers slipped sequentially from $52.20 to $50.70. In other words, as customers adopt T-Mobile’s lower cost plans, ARPU drops and that makes it harder to boost earnings.
For AT&T, one might note that fourth quarter 2013 results showed AT&T lagging substantially behind Verizon Wireless, while the pace of net new customer additions dipped, year over year.
So many would say those developments are signs that a price war has not broken out in the U.S. mobile market, and that the price war is an illusion.
Such sentiments might be scoffed at, among executives at the leading U.S. mobile service providers, who seem to be pouring lots of effort into recrafting offers and retail packaging to parry T-Mobile US attacks.
Equity analysts generally do believe a marketing war is underway, and will destabilize revenues for several of the providers, if not all.
As so often happens in the communications business, multiple trends operate at once. Retail promotions offered by the leading mobile service providers are changing formal price points, even if consumers are acting in ways that do not affect recurring revenue all that much.
But U.S. mobile revenues have been a bright spot, so a marketing price war and growing revenues are not strictly and necessarily mutually incompatible. A price war could weaken either gross revenue or profit margins, or both, but in the context of a still-growing market might not necessarily lead to lower overall revenues.
In other words, a price war can exist even if aggregate market revenue grows. But the growth arguably is less than it might have been in the absence of the price competition.
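A back-of-envelope sketch shows how that works. The ARPU figures are those reported for T-Mobile US branded postpaid customers; the subscriber counts are illustrative assumptions, since revenue is roughly subscribers times ARPU:

```python
# Revenue ~ subscribers x monthly ARPU x 3 (per quarter). A price war
# can cut ARPU while subscriber growth still lifts total revenue.
def quarterly_revenue(subscribers, monthly_arpu):
    return subscribers * monthly_arpu * 3

# ARPU figures from the post; subscriber counts assumed for illustration.
before = quarterly_revenue(20_000_000, 52.20)  # prior quarter
after = quarterly_revenue(22_000_000, 50.70)   # ARPU down, subs up 10 percent

print(f"${before / 1e9:.2f}B -> ${after / 1e9:.2f}B")
# Revenue grows despite falling ARPU because subscriber growth
# outpaces the per-user price decline.
```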
To be sure, one might argue that some of the price war is illusory. One example is the separation of device purchases and creation of new device installment plans from recurring service fees. On a formal basis, monthly costs drop when device subsidies are removed.
But many consumers choose to finance their devices using installment plans which, in aggregate, are roughly revenue neutral for the carrier offering the plans.
On the other hand, revenue is but one part of the carrier bottom line. If marketing costs rise, or churn increases, then even growing gross revenues might result in less robust returns on the bottom line.
Also, there is a secular (structural or long term trends) change underway, in addition to cyclical developments, thus changing end user demand in contradictory ways.
Higher smartphone adoption leads to higher data plan purchases, growing average revenue per device and account even if cyclical promotions might cut prices for many consumers.
Sooner or later, in a saturated market, once a player decides to operate at a lower operating margin, it triggers value destruction in the industry, which can sometimes be devastating to incumbents, argues mobile analyst Chetan Sharma.
Reliance in India in 2002 introduced “really low cost” voice plans that hit industry revenues overall.
In 2012, Free Mobile (owned by Iliad) in France decided that it does not need to operate at 30 percent to 40 percent margins and could exist, long term, with 20 percent margins.
Perhaps the evidence for T-Mobile US damaging the other leading carriers is not clear cut. AT&T’s revenue increased five percent and the average revenue per user was stable.
Verizon saw a six percent increase in revenue in 2013. Sprint revenue was flat, but Sprint also was in the midst of a major network revamp that limited its marketing in significant ways.
Customers are switching to T-Mobile US based on T-Mobile US prices. If the other carriers respond, the T-Mobile US attack becomes increasingly difficult.
Monday, March 24, 2014
If There is No More Beachfront, Users Have to Share the Beach: the Argument for Flexible Spectrum Sharing
Spectrum is valuable for the same reason beachfront property is valuable: "they aren't making any more of it."
In other words, if mobile and untethered spectrum demand grows by 1,000 times over the next decade, as many assume it will, there is precious little unallocated spectrum that can be put to use.
Indeed, there is growing recognition in the U.S. communications policy community that the big potential gain in useful communications spectrum will have to come from more efficient use of spectrum already allocated, but underused.
Though in principle it might be possible to move existing licensees from their current frequencies to new spectrum, the cost to do so generally is quite high, and the time to make the changes generally long.
So there is new thinking about ways to share existing spectrum, without the need to move existing users. There also is new thinking about how to manage interference in a decentralized and efficient way, without relying on slow, cumbersome, expensive adjudication by FCC rule makings.
“Today’s great spectrum policy challenge is thus to maximize the value that can be derived from bands already in use,” say the authors of Unlocking Spectrum Value through Improved Allocation, Assignment, and Adjudication of Spectrum Rights, written by Pierre de Vries, Silicon Flatirons Center senior fellow and co-director of the Spectrum Policy Initiative, and Philip Weiser, University of Colorado Law School dean.
And, as a practical matter, the only way to do so efficiently is to create a new framework for the decentralized management of spectrum, the authors argue.
The authors suggest “command and control” regulation of communications spectrum be replaced by a system allowing spectrum users to directly negotiate coexistence and spectrum agreements without government regulators having to act as gatekeepers.
New ways to manage potential interference and then adjudicate interference disputes would be part of the framework, largely because “a lack of clarity concerning interference prevention between neighboring spectrum users and an inadequate system for allowing trades and resolving disputes between users” are primary reasons why spectrum is inefficiently used.
Claims of harmful interference between systems are at the heart of disputes about whether a user’s rights have been violated, or, alternatively, whether a user has lived up to its responsibilities to tolerate reasonable levels of interference, the authors note.
So any decentralized, fast-acting system would require clear methods to identify when harmful interference (not simply some interference) has occurred, and a mechanism for judging whether such claims have merit, and addressing the claims.
The three-part plan would create “harm thresholds” that are clear, allowing devices and users to tolerate some amount of interference, but also specify clear signal level impairments that create the basis for action against an infringing party.
In other words, there would be some mutually agreed upon level of interference that does not compromise a license holder's rights and ability to use spectrum. Under that threshold, a license holder would not have sufficient cause for action against another party sharing the spectrum.
The rationale there is that not all interference is debilitating, and much time and expense would be removed if all parties knew exactly what the limits were.
But the harm claim thresholds also would specify what in-band and out-of-band interfering signal levels would trigger a claim of harmful interference, and the ability to seek a remedy.
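The harm-claim-threshold idea reduces to a simple decision rule: interference below the agreed levels must be tolerated, while a measurement above either the in-band or out-of-band limit supports a claim. The threshold values in this sketch are hypothetical, not drawn from the paper.

```python
from dataclasses import dataclass

@dataclass
class HarmThreshold:
    """Agreed interference limits a license holder must tolerate."""
    in_band_dbm: float
    out_of_band_dbm: float

def claim_has_merit(t: HarmThreshold, in_band_dbm: float,
                    out_of_band_dbm: float) -> bool:
    """A harm claim has merit only if a measured level exceeds its limit."""
    return in_band_dbm > t.in_band_dbm or out_of_band_dbm > t.out_of_band_dbm

t = HarmThreshold(in_band_dbm=-90.0, out_of_band_dbm=-70.0)
print(claim_has_merit(t, -95.0, -75.0))  # False: below both limits, tolerate
print(claim_has_merit(t, -85.0, -75.0))  # True: in-band limit exceeded
```

The point of publishing the limits in advance is that both parties can apply this test themselves, without waiting for an FCC rule making.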
A second requirement is to create a more-liquid market by allowing licensees to negotiate efficiently with holders of neighboring blocks of spectrum. Right now the process of creating operating rights between spectrum neighbors is cumbersome, expensive, politically charged and slow.
That more-liquid market would address a second challenge for spectrum regulation: the collective action problem that stems from band fragmentation.
At present, it is cumbersome and expensive for licensees to negotiate with spectrum holders in neighboring spectrum bands to deal with potential interference issues. One reason is that the FCC and National Telecommunications and Information Administration are required to handle any such conflicts.
Such disputes would be resolved faster, at less cost, if the parties could efficiently negotiate with each other.
The third element is creation of an adjudication mechanism capable of acting faster than the FCC now can act, without the need to rely on “rule making” processes.
The proposal would allow more-flexible clearing of spectrum for shared use, including both exclusive, tradable, flexible-use licenses assigned by auction (typically for mobile services) and open-access or “unlicensed” regimes that allow unlicensed flexible use.
Fiber to Home Momentum Has Changed Significantly in Last 2 Years, Expert Says
Blair Levin, former Federal Communications Commission chief of staff under Chairman Reed Hundt, also was the executive director of the National Broadband Plan effort, issued about four years ago.
With the caveat that not everybody agrees that the drafting of a “national plan,” by any country, necessarily means very much, Levin, an experienced “inside the Beltway” operator well versed in the politics of communication policy, has an interesting take on progress in the U.S. market after release of the plan.
There are four areas Levin says are important for estimating progress. “One is, are you driving fiber deeper?” Levin says. Also “are you using spectrum more effectively?”
Third, “are you getting everybody on?” Levin says. Finally, “are you using the platforms to deliver public goods more effectively?”
As you might guess, Levin thinks progress has been uneven. “It's mixed on all of them,” Levin said.
But Levin is surprised by the progress in the area of “driving fiber deeper.” As recently as two years ago, Levin says, he would have said progress was not being made in that area.
Now, Levin thinks we are making progress, and that ISPs are driving fiber deeper into their networks. One might credit Google Fiber for much of that progress, simply because it is disrupting the market with symmetrical gigabit network services, sold for a market-destabilizing $70 a month, on the back of its own networks.
That Levin, no casual observer of broadband and communications policy, thinks something has changed for the better in terms of optical fiber deployment, over the last two years, is as clear a testament to Google Fiber’s impact as anything else you might point to, except for the growing number of incumbent ISPs willing to build gigabit networks in multiple markets.