Monday, January 6, 2014

Winners and Losers in Content

Any specific regulation or law normally produces winners and losers. Consider the impact of copyright law. You might instinctively assume that when copyright produces more revenue, the result is more content production.

Some might argue the relationship between revenue and creative output is not so simple. In other words, less copyright protection arguably can lead to more content production by at least some producers.

Broader copyright may thus entail a trade-off between two marginal effects: More original works from new authors along one margin, but fewer original works from the most popular existing authors along a second, argues Glynn S. Lunney, Jr. of the Tulane University School of Law.

If the second effect outweighs the first, then more revenue (produced by greater copyright protection) may lead to fewer original works. Conversely, less revenue (produced by less copyright protection) may lead to more original works, albeit by newer artists.
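One compact way to state the trade-off (the notation here is mine, not Lunney's): write the total output of original works N as a function of revenue R, with separate contributions from new entrants and from the most popular existing authors.

```latex
% dN/dR combines two margins with opposite signs:
\frac{dN}{dR} \;=\; \underbrace{\frac{dN_{\mathrm{new}}}{dR}}_{>\,0}
\;+\; \underbrace{\frac{dN_{\mathrm{top}}}{dR}}_{<\,0}
```

If the second term dominates in magnitude, total output falls as revenue rises, which is the counterintuitive possibility described above.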

“While this may seem radically counterintuitive, it also happens to be true,” Lunney argues.  

Lunney studied the relationship between copyright protection, revenue, and creative output by looking at file sharing and the parallel fall in music industry revenue.

Looking at songs in the top fifty of the Billboard Hot 100 from 1985 through 2013, Lunney found that the sharp decline in music industry revenue that paralleled the rise of file sharing was associated with fewer new artists entering the market, but also more hit songs, on average, by those new artists who did enter.

Moreover, because the second marginal effect was larger than the first, the decline in revenue since file sharing began was associated with a net increase in the number of new hit songs.

“Thus, for the music industry, the rise of file sharing and the parallel decline in revenue has meant the creation of more new music,” says Lunney.

Those findings will provide little comfort in some quarters. As with many other phenomena related to the Internet ecosystem, more usage does not translate directly into “more revenue” for some participants, even if it means more revenue is earned by other participants.

In other words, there are some winners and some losers.

Some Pro-Competitive Policies Just Don't Work

Political rationality and economic rationality sometimes are in conflict. In other words, public policy sometimes (perhaps often) is applied in ways that actually are counterproductive, whether that is communications policy or social policy.

One example: nearly two years after the official end of the "Great Recession," the U.S. labor market remains historically weak. Counterintuitively, dramatic expansions of unemployment insurance might be prolonging the problem.

In other words, not only do our efforts to ameliorate distress accomplish very little; those efforts can actually make the problem worse.

To be specific, a study by the National Bureau of Economic Research suggests that unemployment insurance extensions had significant but small negative effects on the probability that the eligible unemployed would exit unemployment, concentrated among the long-term unemployed.

In other words, UI benefit extensions raised the unemployment rate in early 2011 by about 0.1 to 0.5 percentage points.

National policies to promote competition in telecommunications markets suffer from similar dangers. What “seems reasonable” to promote consumer welfare might in fact lead to the opposite effect, namely a reduction in long term consumer welfare.

The practical example is policy affecting the number of providers in any market segment. And just how few service providers are necessary to provide meaningful competition in any segment of the telecommunications business is a thorny question.

Many observers would say the empirical evidence is fairly clear when the number of suppliers is “one or two,” based on the history of monopoly fixed network communications, or on the sluggish adoption, high prices and limited innovation seen when just two mobile service providers operated in a single market.

But there is more controversy about the minimum number of contestants required in the satellite TV segment, fixed network business and mobile business, under present circumstances.

Intermodal competition (competition from other suppliers outside the segment) is the difference. A decade or so ago, one might have argued, as U.S. antitrust regulators did, that the satellite TV market would be insufficiently competitive if the two suppliers merged.

These days, satellite TV competes directly, and successfully, with cable TV and telco TV suppliers, at least for the video product. But satellite providers are at a clear disadvantage in the areas of broadband Internet access, voice and interactive services generally.

So “two becoming one” in the satellite segment might not be as challenging as in the past, in terms of impact on consumer welfare.

The other key challenge is the minimum number of service providers necessary to maintain effective or reasonable levels of competition in the core fixed network and mobile service segments.

In the fixed network access market, that minimum number today is “two.” Google Fiber will provide a key test of whether the long-term number of sustainable providers can become “three.”

In the U.S. mobile communications business, the developing issue is whether three major providers will provide sufficient sustainable competition. To be sure, there is a common-sense belief that four providers provide more competition than three.

That, in fact, is a common belief among regulators in some European markets.

To the extent that is true, the issue is whether competition is sustainable over the long term. Highly fragmented markets can be relatively stable over long periods of time, so long as capital intensity is low. Most packaged consumer products categories provide examples.

But access networks are capital intensive, limiting the number of viable providers over the long term, even if, in the near term, the market can temporarily support more competitors. And that’s the conundrum.

It is difficult to say what the minimum number of providers must be to provide the benefits of competition, beyond the number “one.” One is tempted to argue that “more providers” provides greater benefits than “fewer” providers.

That might even be the case, in the short term. Over the long term, sustainable competition might feature fewer competitors. The reason is simple enough: capital intensive businesses require enough profit margin to allow robust investment in the business.

Having “too many” providers in a market tends to reduce profit margins so much that no providers, at least theoretically, can earn enough to sustain themselves over the long term.
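A back-of-the-envelope sketch (with purely hypothetical numbers, not drawn from any market cited here) illustrates why: every facilities-based provider must build past essentially every home, but the providers split the subscribers among themselves, so capital invested per subscriber rises roughly in proportion to the number of networks.

```python
# Illustrative sketch (hypothetical numbers): why per-subscriber capital cost rises
# as more facilities-based competitors build networks in the same market.

COST_PER_HOME_PASSED = 700.0   # assumed network build cost per home passed ($)
HOMES_IN_MARKET = 100_000      # assumed size of the local market
# Simplifying assumption: every home buys service, split equally among providers.

for competitors in (1, 2, 3, 4):
    subscribers_each = HOMES_IN_MARKET / competitors
    # Each provider passes all homes but serves only its share of subscribers.
    capital_per_subscriber = (COST_PER_HOME_PASSED * HOMES_IN_MARKET) / subscribers_each
    print(f"{competitors} provider(s): ~${capital_per_subscriber:,.0f} invested per subscriber")
```

Under those assumptions, the same dollar of network investment must be recovered from one-quarter as many customers when four networks overbuild the same territory.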

So although “more” sounds like a better recipe for competition than “fewer,” fewer might be the way to sustainable long term competitive benefits.

Sure, it sounds crazy that “fewer” competitors might produce better consumer outcomes than “more” competitors. But the problem is that a highly capital intensive business requires methods to earn enough money to build the next generation of networks. And “excessive” levels of competition might be quite detrimental in that regard.

In the end, perhaps political rationality wins, at the expense of economic rationality. But there is no reason to pretend that some policies designed to promote competition and consumer welfare actually will do so.

Some policies designed to ensure competition might actually do so at the expense of the ability to invest in the next generation of networks. In that sense, some touted pro-competitive policies might lead, in the long term, to sub-optimal consumer welfare.

Verizon Wireless, T-Mobile US Want to Swap Spectrum

Verizon Wireless and T-Mobile US have asked the U.S. Federal Communications Commission to exchange blocks of spectrum, generally on a one-for-one basis, in hundreds of U.S. counties.

Such spectrum swaps are not unusual in the mobile business. In 2012, five mobile service providers agreed to trade blocks of spectrum, acquiring spectrum from Cox Communications.

The Verizon and T-Mobile US exchanges would allow both firms to operate more efficiently, since after the exchanges each firm would have larger blocks of contiguous spectrum. In some cases, the additional spectrum is contiguous to spectrum each carrier already is operating.

In either case, each carrier would benefit from using larger blocks of spectrum, and in some cases also benefit from contiguous spectrum.

The moves are mostly tactical, allowing each service provider to operate more efficiently, since the deals do not change the aggregate amount of spectrum holdings of either carrier.

The FCC’s initial review of the applications indicates that, after the transaction, Verizon Wireless would hold 67 MHz to 149 MHz of spectrum and T-Mobile would hold 30 MHz to 100 MHz of spectrum in the 518 counties covering parts or all of 133 different cellular markets.

Since the swaps generally are one for one, those holdings largely reflect the amount of licensed spectrum each mobile service provider held before the transaction.

The exchanges will not affect any current subscribers of either network, and involve blocks of spectrum not yet activated by either mobile operator.

In the case of the intra-market exchanges of equal amounts of PCS spectrum, Verizon Wireless and T-Mobile would exchange 5 MHz to 20 MHz of PCS spectrum in 153 counties across 47 market areas, the FCC notes.

In addition, in 11 counties across three markets in Texas, Verizon Wireless would assign 20 MHz of PCS spectrum to T-Mobile, and would receive 10 MHz of PCS spectrum in return.

Also, Verizon Wireless would assign 5 MHz to 10 MHz of PCS spectrum to T-Mobile in an additional 34 counties across 13 market areas.

In the case of the intra-market exchanges of equal amounts of AWS-1 spectrum, Verizon Wireless and T-Mobile would exchange 10 MHz to 20 MHz of AWS-1 spectrum in 285 counties across 59 cellular market areas (CMAs).

In addition, in the Vineland-Millville-Bridgeton, NJ market, T-Mobile would assign 10 MHz to Verizon, and would receive 20 MHz of AWS-1 spectrum.

In the Oxnard-Simi Valley-Ventura, Calif. market, as well as the Eugene-Springfield, Ore. market, T-Mobile would assign 40 MHz and would receive 30 MHz of AWS-1 spectrum.

Further, Verizon Wireless would assign 10 MHz of AWS-1 spectrum to T-Mobile US in 16 counties across four markets.

T-Mobile US would assign 10 MHz to 20 MHz of AWS-1 spectrum to Verizon Wireless in 26 counties across nine markets.

The swaps reflect a rationalization of spectrum each carrier had acquired in various auctions, but do not, in and of themselves, change market dynamics in the local markets or nationally. The swaps instead allow each mobile service provider to operate more efficiently, wringing more bandwidth out of the same amount of licensed spectrum, compared to the original set of holdings.
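As a rough illustration of why larger contiguous blocks help, the sketch below uses the standard LTE channel bandwidths and resource-block counts from the 3GPP specifications; these are general LTE parameters, not figures from the FCC filing. The narrowest carriers lose proportionally more of the licensed block to guard band, and, with carrier aggregation not yet widely deployed at the time, peak data rates scaled with the width of a single contiguous carrier.

```python
# Standard LTE channel bandwidths and resource-block (RB) counts (3GPP TS 36.101).
# Each RB occupies 180 kHz. Narrow carriers devote a larger share of the licensed
# block to guard band, and one wide contiguous carrier supports higher peak rates
# than several scattered narrow ones (absent carrier aggregation).

RB_KHZ = 180
lte_channels = {1.4: 6, 3: 15, 5: 25, 10: 50, 15: 75, 20: 100}  # MHz -> resource blocks

for mhz, rbs in lte_channels.items():
    usable_mhz = rbs * RB_KHZ / 1000
    print(f"{mhz:>4} MHz channel: {rbs:3d} RBs, ~{usable_mhz:.2f} MHz usable "
          f"({usable_mhz / mhz:.0%} of the licensed block)")
```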

Google Launches Connected Car Initiative

Some might argue the automobile is the most important piece of new “hardware” that will become an important platform for software and apps.

Google has teamed with Audi, GM, Honda, Hyundai and Nvidia to form the Open Automotive Alliance (OAA), a global alliance to grow the connected car business by integrating Android-based devices and software into vehicles.

The connected car is projected to generate US$282 billion in revenue in 2022, driven by the deployment of 1.5 billion machine-to-machine connections in the sector.

“Factory-fit vehicle platforms” such as GM’s OnStar or BMW “Connected Drive” will represent about 36 percent of connections in 2022, and aftermarket application-specific devices will account for the remainder.

Some US$32 billion will be generated by devices, US$20 billion by connectivity services and US$231 billion by applications and services that make use of the M2M connectivity. In other words, as has been true of the mobile and Internet business overall, the bulk of new revenue is generated by apps and services, not devices or access.
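A quick arithmetic check of that breakdown (the segment figures sum to roughly the US$282 billion headline number, with rounding) shows just how lopsided the split is:

```python
# Breakdown of the projected 2022 connected-car revenue cited above (US$ billions).
segments = {"devices": 32, "connectivity services": 20, "applications and services": 231}

total = sum(segments.values())  # ~283, versus the ~282 headline figure (rounding)
for name, revenue in segments.items():
    print(f"{name}: ${revenue}B ({revenue / total:.0%} of the total)")
```

Applications and services account for roughly four-fifths of the projected revenue, with devices and connectivity splitting the remainder.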

Learn more about the OAA at openautoalliance.net.

Saturday, January 4, 2014

Will End of Smartphone Subsidies Actually Help Mobile Service Providers and Ecosystem?

It might seem self-evident that smartphone subsidies are a burden for mobile service providers. If that is the case, getting rid of device subsidies should be financially helpful.

And there is some evidence that operating income does improve when subsidies are ended.

After all, if a carrier buys devices from Apple at $660, on average, and then requires consumers to pay $200 for the device, while recovering the balance of the device cost over the life of a two-year contract, the carrier has to amortize the device cost over time. That has the effect of lowering operating income, since some portion of revenue simply reflects recovery of the upfront $460 difference between what the carrier paid for the device and the price the customer paid.
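A simple worked version of that arithmetic, assuming straight-line recovery over a 24-month contract (the contract length and recovery method here are illustrative assumptions):

```python
# Rough arithmetic behind the subsidy model described above.
device_cost_to_carrier = 660     # what the carrier pays Apple, on average ($)
upfront_price_to_customer = 200  # what the subscriber pays at purchase ($)
contract_months = 24             # typical two-year contract

subsidy = device_cost_to_carrier - upfront_price_to_customer  # $460 carrier-funded gap
recovery_per_month = subsidy / contract_months                # ~$19.17 baked into each bill

print(f"Carrier-funded subsidy: ${subsidy}")
print(f"Implied device recovery in each monthly bill: ${recovery_per_month:.2f}")
```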


That is one reason why T-Mobile US has abandoned smartphone subsidies, and why other carriers are adding plans that achieve similar objectives.


Of course, the matter is more complicated. It is not clear how consumer behavior changes, if and when all consumers are required to pay full retail price for their devices, even when the advantage is lower monthly recurring fees for service.


Certainly many customers would find they do not want to spend $600 to $800 for a smartphone, or might not upgrade as often. Lower smartphone sales mean lower data plan revenue.


The issue is how big the effect might be, though, now that a majority of users already use smartphones. Still, it is easy enough to predict that more users would shift to less-expensive devices and upgrade less often.


That would affect the fortunes of device suppliers and likely encourage suppliers to produce more models that are less costly.

That could lead to lower rates of software innovation and application development as well. Other ripple effects could include lower service provider equity prices (lower rates of revenue growth would lead to less robust retail share prices) and possibly less investment in the industry (rates of return would drop).


Perhaps it is indisputable that smartphone adoption and innovation have benefited from subsidies. The issue is how policies might change once adoption is nearly universal. 

Friday, January 3, 2014

Small Merchant Adoption of Mobile Credit Card Readers at 40%

With the possible exception of the Starbucks mobile payment system in the end user segment, retailer use of mobile credit card readers connected to smartphones and tablets is the standout winner in the emerging mobile payments business.

In fact, some 40 percent of small and mid-sized businesses surveyed by BIA/Kelsey said they accept payments at the point of sale with a mobile credit card reader attached to a smartphone or tablet.

About 16 percent of surveyed retailers said they were planning to add this capability in the next 12 months, according to BIA/Kelsey.

In fact, such payments already eclipse the amount of contactless payment activity by quite some measure. 

Although mobile POS proximity payments made up just 0.01 percent of total retail POS volume in 2012, mobile devices (smartphones and tablets) will help propel mobile payments to $5.4 billion by 2018.



Mobile Now More than 65% of All U.S. Internet Access Connections

Of 262 million U.S. broadband access connections, almost 65 million fixed and 64 million mobile connections offered download speeds at or above 3 megabits per second (Mbps) and upload speeds at or above 768 kbps, compared to 51 million fixed and 31 million mobile connections a year earlier, according to Federal Communications Commission data.

In other words, fixed and mobile networks supply a roughly equal number of Internet access connections of 3 Mbps and faster. To be sure, mobile and fixed access services are not equivalent in cost per megabyte or size of usage allowances.

But mobile has become a significant supplier of “faster” connections. For example, of connections offering 6 Mbps or faster service, fixed networks supply about 41 million connections, while mobile networks supply about 32 million connections.

For a historically bandwidth-limited sort of network, that improvement on the mobile front is significant.

To be sure, mobile Internet access connections are underrepresented among services of 6 Mbps and faster, and over-represented among connections of 3 Mbps and slower. But mobile connections already represented 65 percent of all Internet access connections in the United States at the end of 2012.

In December 2012, 21 percent of reported fixed connections (19.3 million connections) were slower than 3 Mbps in the downstream direction, 16 percent (15.2 million connections) were at least 3 Mbps in the downstream direction but slower than 6 Mbps, and 63 percent (58 million connections) were at least 6 Mbps in the downstream direction, the FCC reports.
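Those percentages also let one roughly reconstruct the headline 65 percent figure; the back-of-the-envelope estimate below is mine, not an FCC calculation:

```python
# Rough reconstruction of the "mobile is ~65% of connections" figure from the FCC
# data cited above. Totals include connections slower than 3 Mbps.
total_connections_m = 262   # all U.S. broadband access connections (millions)
fixed_slow_m = 19.3         # fixed connections slower than 3 Mbps (millions)
fixed_slow_share = 0.21     # ...which the FCC says is 21% of all fixed connections

fixed_total_m = fixed_slow_m / fixed_slow_share       # ~92 million fixed connections
mobile_total_m = total_connections_m - fixed_total_m  # ~170 million mobile connections

print(f"Estimated fixed connections:  {fixed_total_m:.0f} million")
print(f"Estimated mobile connections: {mobile_total_m:.0f} million "
      f"({mobile_total_m / total_connections_m:.0%} of all connections)")
```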

It might not be clear from the FCC statistics, but typical Internet access speeds have grown surprisingly fast in the U.S. market. That might come as a shock to some.

In fact, Internet service provider speeds have grown at about the rate you would expect for a Moore's Law-driven product. That should be a surprise, since access networks are notoriously expensive and take some time to build. That noted, from 2000 to 2012, the typical U.S. access connection speed grew by about two to three orders of magnitude.

Retail prices also now provide dramatically more bandwidth per dollar. In fact, people now pay less for a 40 Mbps access service than they used to pay for a 512 kbps access service.
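Using the two endpoints mentioned above (roughly 512 kbps around 2000 and roughly 40 Mbps in 2012) as admittedly rough "typical" values, the implied growth rate works out as follows:

```python
# Rough growth arithmetic using the figures cited above.
speed_2000_kbps = 512      # "typical" connection circa 2000
speed_2012_kbps = 40_000   # "typical" connection circa 2012 (40 Mbps)
years = 12

multiple = speed_2012_kbps / speed_2000_kbps   # ~78x, roughly two orders of magnitude
annual_growth = multiple ** (1 / years) - 1    # implied compound annual growth rate

print(f"Speed multiple over {years} years: ~{multiple:.0f}x")
print(f"Implied compound annual growth: ~{annual_growth:.0%} per year")
```

A compound growth rate in that range is indeed comparable to what Moore's Law-style scaling would suggest.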

Though the FCC report does not highlight the trend, the improvement in access speeds has been rapid.

In August 2000, only 4.4 percent of U.S. households had a home broadband connection, while 41.5 percent of households had dial-up access.

A decade later, dial-up subscribers declined to 2.8 percent of households in 2010, and 68.2 percent of households subscribed to broadband service.

Though it perhaps is understandable that people expect more, and expect it now, a bit of perspective probably is in order.

Internet access connections that essentially double speed every three to five years, while also featuring lower prices per unit of speed, are impressive. At those rates of change, gigabit connections will be common by about 2020.



