Tuesday, May 8, 2018

IoT in Agriculture

A study of U.S. farmers and ranchers, conducted by Alpha Brown, suggests that internet of things (IoT) solutions are currently used by 250,000 farmers, mainly for livestock and cereal crops (grains).


The technology is also used on a smaller scale in other farming operations, such as dairy, vegetables, fruit and greenhouses, Alpha Brown found.

Furthermore, the study reveals that more than half of U.S. farmers have an interest in buying such solutions, which reflects a market potential of 1.1 million farmers and a market size of $4 billion a year.

Business Insider predicts that IoT device installations in agriculture will increase from 30 million in 2015 to 75 million in 2020, a compound annual growth rate of about 20 percent.
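That 20 percent figure follows from the standard compound annual growth rate formula. A minimal sketch in Python, using only the figures cited above:

def cagr(start_value, end_value, years):
    # Compound annual growth rate over the given number of years.
    return (end_value / start_value) ** (1 / years) - 1

# 30 million installations in 2015, 75 million projected for 2020 (five years)
print(f"Implied CAGR: {cagr(30e6, 75e6, 5):.1%}")  # roughly 20.1 percent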

OnFarm, which makes a connected farm IoT platform, expects the average farm to generate 4.1 million data points per day in 2050, up from 190,000 in 2014.


Furthermore, OnFarm ran several studies and discovered that for the average farm, yield rose by 1.75 percent, energy costs dropped $7 to $13 per acre, and water use for irrigation fell by eight percent.

IoT is used in agriculture to control remote instruments and sensors in order to optimize farm work, measuring variables such as light, temperature, soil moisture, rainfall, humidity, wind speed, pest infestation, soil nutrients and location.
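As a purely illustrative sketch (not any particular vendor's platform or API), the decision logic such sensors feed might look like the following; the sensor fields, thresholds and rule here are hypothetical:

from dataclasses import dataclass

@dataclass
class FieldReading:
    soil_moisture_pct: float   # volumetric soil moisture, percent
    rainfall_mm_24h: float     # rainfall recorded over the last 24 hours
    temperature_c: float       # air temperature

def should_irrigate(reading, moisture_threshold=22.0, recent_rain_mm=5.0):
    # Irrigate only if the soil is dry and no meaningful rain fell recently.
    return (reading.soil_moisture_pct < moisture_threshold
            and reading.rainfall_mm_24h < recent_rain_mm)

sample = FieldReading(soil_moisture_pct=18.5, rainfall_mm_24h=0.0, temperature_c=27.0)
print("Start irrigation:", should_irrigate(sample))  # True for this sample reading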

Monday, May 7, 2018

Most U.S. Voice Competition is Facilities Based

A new petition by US Telecom to the Federal Communications Commission for forbearance on mandatory competitor access to unbundled wholesale network elements is not unexpected.


Big facilities owners always argue that particular rules related to mandatory wholesale access create disincentives for them to invest in next-generation facilities.


Would-be competitors virtually always want both discounts and unbundled access, as they cannot afford to build their own facilities.


Among other issues, such rules always involve balancing incentives for investment with enabling more competition. US Telecom argues that almost all competition in the voice market now is facilities-based, which was the hope of backers of the Telecom Act of 1996.


Some 60 percent of U.S. households now use mobile for voice services, while 29 percent use cable TV or other facilities providers for voice.


Of households that do buy fixed network voice, about 55 percent buy from a cable TV or other facilities-based provider.




In business markets, at year-end 2016, telco share of business- and government-grade switched access and interconnected VoIP connections had fallen to 45 percent, down from 49 percent in 2015. Trends since then are likely unchanged, with competitors continuing to gain share.

Would-be competitors always argue they cannot compete effectively without such rules. The latest petition asks for relief on copper facilities wholesale access. Mandatory access rules for optical access already have been lightened.




The argument for mandatory wholesale access (in the past, at steep discounts) has been that such steps were necessary to allow competitors to enter the local access market and create the basis for further investment by such firms in their own facilities.


In practice, access networks are so expensive that it often is questionable whether new competitors can entertain “full market” investment in facilities. That is why most such investment has been to support business services in dense customer areas, or optical and other access facilities deployment for consumers only in parts of a city.


The specific rules generally relate to providing voice services. In 2000, some 186 million telco switched voice lines were in service. In 2018, a projected 35 million telco lines will be in service.


In residential markets, only 11 percent of U.S. households are projected to have an ILEC switched voice line by the end of this year, US Telecom says. Indeed, 60 percent of Americans will have abandoned wireline voice service entirely in favor of wireless alternatives.


Of the remaining 40 percent, a majority will obtain service from a non-ILEC, often a cable company or other provider of voice over Internet protocol (VoIP).


Firms whose business strategies are based on access to unbundled network facilities (serving roughly two million voice lines, total) will oppose the rule changes, for obvious reasons.


Facilities owners will just as vigorously argue that the hoped-for facilities-based competition now is a reality.

If Facebook Wants to Operate a Global Satellite Internet Access Network, What Does That Tell You about Sprint T-Mobile US Strategy?

An application by Pointview Tech (said to be a subsidiary of Facebook) to test low earth orbit satellites provides some perspective on the wisdom and long-term viability of the proposed Sprint merger with T-Mobile US.

The experimental license sought by Pointview Tech suggests Facebook is contemplating a potential role as a supplier of global internet access.

In other words, the move potentially points to interest on the part of Facebook in launching its own constellation of low earth orbit satellites to provide global internet access, in competition with other proposed LEO constellations planned by SpaceX and OneWeb, Boeing and others.

More significantly, such interest points to where the connectivity and apps, content and platform businesses are going. Simply put, the value of stand-alone connectivity, apps-only or platform-only (possibly device-only) business models is questionable, going forward, for tier-one providers in any segment of the internet ecosystem.

Arguments can be made about the impact, positive or negative, of a successful Sprint merger on competitive dynamics in the U.S. mobile market.

Some might argue that is ultimately not the point, as the stand-alone mobile market, or the broader access market, is unsustainable as a stand-alone business at the tier-one level.

If one believes the future lies in vertical integration (think of Facebook as app, platform, content and access provider, on a global basis), then even the Sprint tie-up with T-Mobile US is ultimately a waystation on the road elsewhere.

In one sense, that is the problem with regulating when markets are convulsing.

Any antitrust or diminished-competition review is likely to hinge on dynamics within the mobile market, for which there is logic. Both firms are “mobile access only” companies.

But they operate in a market that most observers would say is moving towards vertical integration. So Google and Facebook become internet service providers, Apple becomes a chip supplier and services provider, Comcast becomes a content developer, while AT&T wants to do the same.

One might argue a Sprint merger with T-Mobile US is the first of a series of future mergers that also would have to be undertaken. But some of us might argue the better move is vertical integration by each firm, now rather than later, with firms positioned elsewhere in the value chain.

Perhaps such partners cannot be found today, of course. But some might argue either asset, paired with Comcast, immediately creates a firm with assets in fixed and mobile connectivity, content and apps. That is not to say Comcast would want to do so, right now.

But you get the point. Stand-alone mobile operations, no matter how big, are not the future. At the very least, in the connectivity business, mobile plus fixed is viewed as the better model. And most would agree the future model moves toward firms with multiple roles in the ecosystem.

The market itself is changing, as access services and content and app providers actually are starting to merge vertically. The Comcast acquisition of NBCUniversal was the primordial step. AT&T’s acquisition of Time Warner would be another illustration.

Hypothetically, other entities (tier-one app providers, device or platform providers) might eventually assume major roles in the access business as well. So, eventually, how we define “the relevant market” could be quite different.

Business strategy is the big question. If, as even supporters might argue, the market is changing in ways that favor vertical integration of access and other assets (content, platform, device), a big horizontal merger is no more than an interim step.

The next consolidation would have to have the new entity merging with one or more firms in the app, platform or device areas. And a bigger mobile asset would have fewer potential partners.

Some would argue the better outcome (from a strategy and competition standpoint) would be mergers by Sprint and T-Mobile US separately with “up the stack” partners that would position each firm for the future market (beyond simple mobile connectivity).

In other words, either asset, paired with a tier-one app, platform or device supplier, would make more long-term sense. Sprint combined with T-Mobile US would be a bigger supplier of mobile access, but with none of the fixed network, content, platform or app assets that would position it to compete in the coming market.

That is a judgment call, but it is the logical conclusion if one argues that a “mobile-only” connectivity position is not sustainable; that a “connectivity only” strategy likewise is unsustainable; and that new revenue sources beyond connectivity must be found.

Both Sprint and T-Mobile US need to merge, in other words.  Just not with each other.

Is Sprint Plus T-Mobile US the Right Merger?

Arguments can be made about the impact, positive or negative, of a successful Sprint merger on competitive dynamics in the U.S. mobile market. Additional arguments can be made about the sustainable structure of the mobile market, or whether the market will change in the future, in any case.

If so, is this the right merger for each firm?

Some might argue that competition actually is increased, as a merged entity will have the financial ability to match promotions by AT&T and Verizon; make investments in 5G; and enjoy other benefits of scale.

Others might argue that price competition likely will decrease. That is why equity analysts universally favor consolidation in the mobile market. “Less price competition” is the expected upside from the merger.

Yet others might argue that even if the number of suppliers decreases, that is the inevitable future, in any case, as Sprint and T-Mobile US are too small to challenge the other leaders if they remain independent entities.

Yet others might argue that additional competitors are coming, though not facilities-based providers, so the market might not ultimately reduce from four to three leaders.

A different class of arguments might be made about the minimum conditions for some amount of sustainable competition in the mobile market. Some believe the U.S. market cannot sustain four facilities-based suppliers. That is an empirical matter and will eventually be tested further, as new competitors enter the market.

The point is that many observers consider a three-leader market sustainable, while a four-provider market is not.

Yet others might argue that the market itself is changing, as access services and content and app providers actually are starting to merge vertically. The Comcast acquisition of NBCUniversal was the primordial step. AT&T’s acquisition of Time Warner would be another illustration.

Hypothetically, other entities (tier-one app providers, device or platform providers) might eventually assume major roles in the access business as well. So, eventually, how we define “the relevant market” could be quite different.

Regulators taking a look at this particular transaction, though, will not have the ability to make determinations about market concentration based on those future developments, and will have to make decisions based on the “market as it now stands.”

It undoubtedly will be argued that the number of facilities-based national providers matters more than the number of mobile virtual network operators (MVNOs), because only a facilities-based supplier has the owner economics to attack pricing levels. Others will argue that the total number of potential tier-one retailers does suffice.

Yet others might argue that the relevant boost in competition will not come in the mobile arena at all, but in the fixed network realm, as 4G and 5G networks are used to displace fixed network services. That might be the case, though the relevant definition of “market” would be broader than “mobile only,” in that case. The challenge is that future competition is not the same as competition already existing in the market.

Up to this point, T-Mobile US has had no interest in that possibility, though obviously it will say it does have such interest, if only to boost its merger chances.

Some will argue the merger boosts investment in 5G. Much of that argument rests on the assumption that 5G will cost significantly more than 4G, something Verizon and AT&T already seem to dispute. And both Sprint and T-Mobile US have touted the speed with which they are independently moving to 5G networks, already.

It is reasonable that a merged Sprint and T-Mobile US could spend less than if each firm built separately, of course. In that sense (infrastructure deployment), a merger might bring more 5G competition, faster.

The more immediate problem is that the merger will clearly increase market concentration as measured by the traditional metrics antitrust officials use.
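The traditional metric here is the Herfindahl-Hirschman Index (HHI), the sum of squared market shares. A minimal sketch, using round, illustrative shares (not actual U.S. mobile market shares), shows how a four-to-three merger raises the index:

def hhi(shares_pct):
    # HHI: sum of squared market shares, with shares expressed in percent.
    return sum(s ** 2 for s in shares_pct)

four_carriers = [35, 33, 17, 15]    # illustrative pre-merger shares
three_carriers = [35, 33, 17 + 15]  # same market after the smaller two combine

print("Pre-merger HHI: ", hhi(four_carriers))                        # 2828
print("Post-merger HHI:", hhi(three_carriers))                       # 3338
print("Change:         ", hhi(three_carriers) - hhi(four_carriers))  # +510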

Business strategy is the other big question. If, as even supporters might argue, the market is changing in ways that favor vertical integration of access and other assets (content, platform, device), a big horizontal merger is no more than an interim step. The next consolidation would have to have the new entity merging with one or more firms in the app, platform or device areas.

Some would argue the better outcome (from a strategy and competition standpoint) would be mergers by Sprint and T-Mobile US separately with “up the stack” partners that would position each firm for the future market (beyond simple mobile connectivity).

In other words, either asset, paired with a tier-one app, platform or device supplier, would make more long-term sense. Sprint combined with T-Mobile US would be a bigger supplier of mobile access, but with none of the fixed network, content, platform or app assets that would position it to compete in the coming market.

That is a judgment call, but it is the logical conclusion if one argues that a “mobile-only” connectivity position is not sustainable; that a “connectivity only” strategy likewise is unsustainable; and that new revenue sources beyond connectivity must be found.

Both Sprint and T-Mobile US need to merge, in other words.  Just not with each other.

Sunday, May 6, 2018

How Many U.S. Households Really Do Not Have Multiple Suppliers of Broadband Internet Access?

One often hears it said that “only 50 percent of U.S. households have a choice of more than one broadband provider.” That is true only with significant qualifications, such as excluding wireless and mobile access platforms and relying on definitions of “broadband” that, while technically precise, are arguably arbitrary.

In other words, if a user has 20 Mbps access, which does not meet the “broadband” definition, is the end-user experience substantially different from what it would be at 25 Mbps?


What people mean when saying such things is that certain percentages of homes have only one provider of “fixed network” internet access service at particular speeds. Virtually every U.S. home has a choice of two satellite providers in addition to fixed service, plus mobile access from three or four suppliers and, in rural areas, quite often a fixed wireless supplier.


In fact, though we might argue that the speeds and prices are not optimal, most U.S. households probably have five to seven choices for internet access, at a variety of speeds, from a variety of platforms.


To make the argument that half of U.S. households have only one broadband provider, one has to eliminate available mobile options, satellite providers and service below certain speed thresholds, be that 4 Mbps, 10 Mbps, 25 Mbps or any other number.


Some 92 percent of the population had access to both fixed terrestrial service at 25 Mbps/3 Mbps and mobile LTE at speeds of 5 Mbps/1 Mbps, the FCC reported in 2018, using 2016 data.


That almost certainly means access and speeds have gotten better over the last couple of years.


In rural areas, 68.6 percent of Americans have access to both services (fixed at 25 Mbps, mobile at 5 Mbps or better), as opposed to 97.9 percent of Americans in urban areas.


With respect to fixed 25 Mbps/3 Mbps and 10 Mbps/3 Mbps LTE services, 85.3 percent of all Americans have access to such services, including 61 percent in evaluated rural areas and 89.8 percent in evaluated urban areas, the FCC says.


At year-end 2016, 92.3 percent of all Americans had access to fixed terrestrial broadband at speeds of 25 Mbps/3 Mbps, up from 89.4 percent in 2014 and 81.2 percent in 2012.


It was true that, in 2016, perhaps 24 million Americans still lacked fixed terrestrial broadband at speeds of 25 Mbps/3 Mbps from at least one provider. But that is not the same as saying those households do not have internet access at 10 Mbps or faster, from multiple non-wired suppliers.






The point is, everything hinges on the definition of “broadband” and the range of suppliers who sell it in any given market.



As you would guess, most of the “one provider” markets are in rural areas.




Gates and Hastings were Right: Near-Zero Pricing Matters

The most startling strategic assumption ever made by Bill Gates was his belief that horrendously expensive computing hardware would eventually become so low cost that he could build his own business on software for ubiquitous devices.

How startling was the assumption? Consider that, in constant-dollar terms, the computing power of an Apple iPad 2 would have cost between US$100 million and $10 billion in 1975, when Microsoft was founded.


The point is that Gates’s assumption that computing operations would become so cheap was an astounding leap. But my guess is that Gates understood Moore’s Law in a way that the rest of us did not.

Reed Hastings, Netflix founder, apparently made a similar decision. For Bill Gates, the insight that free computing would be a reality meant he should build his business on software used by computers.

Reed Hastings came to the same conclusion as he looked at bandwidth trends in terms both of capacity and prices. At a time when dial-up modems were running at 56 kbps, Hastings extrapolated from Moore's Law to understand where bandwidth would be in the future, not where it was “right now.”

“We took out our spreadsheets and we figured we’d get 14 megabits per second to the home by 2012, which turns out is about what we will get,” says Reed Hastings, Netflix CEO. “If you drag it out to 2021, we will all have a gigabit to the home." So far, internet access speeds have increased at just about those rates.

As frightening as it might be for executives and shareholders in the telecommunications industry, a bedrock assumption of mine about dynamics in the industry is that, over time, retail prices for connectivity services also will trend towards zero.

“Near-zero pricing” does not mean absolute zero (free), but only prices so low there is no practical constraint on using the services, just as prices of computing appliances trend lower over time without reaching actual “zero.”


Communications capacity might not be driven directly by Moore’s Law, but it is affected, as chipsets power the optical transmitters and receivers, antenna arrays, switches, routers and all other active elements used by communications networks.

Also, at least some internet access providers--especially Comcast--have been increasing internet access bandwidth  in recent years almost directly in line with what Moore’s Law would predict.

If that is the case, the long-term trend should be that speed doubles about every 18 months. Service providers have choices about what to do, but generally hold prices steady while doubling the speed offered at that price (much as PC manufacturers have done).
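A minimal sketch of that extrapolation, assuming the 18-month doubling period holds and taking the 14 Mbps (2012) figure from the Hastings quote above as the starting point:

def projected_speed_mbps(start_mbps, start_year, year, doubling_period_years=1.5):
    # Project access speed, assuming it doubles every doubling_period_years.
    return start_mbps * 2 ** ((year - start_year) / doubling_period_years)

for year in (2012, 2015, 2018, 2021):
    print(year, round(projected_speed_mbps(14, 2012, year)), "Mbps")
# 2021 comes out near 900 Mbps, roughly the "gigabit to the home" Hastings projected.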

In that case, what changes is cost per bit, rather than posted price. But an argument can be made that retail prices have dropped as well. According to the U.S. Bureau of Labor Statistics, prices for internet services and electronic information providers were 22 percent lower in 2018 than in 2000.

That is not always so obvious for any number of reasons. Disguised discounting happens when customers buy service bundles. In such cases, posted prices for stand-alone services are one thing; the effective prices people pay something else.

Also, it matters which packages people actually buy, not simply what suppliers advertise. Customers do have the ability to buy faster or slower services, with varying prices. So even when posted prices rise, most people do not buy those tiers of service.

And promotional pricing also plays a role. It is quite routine to find discounted prices offered for as much as a year.

The point is that, taking into account all discounting methods and buyer habits, what people pay for internet access has arguably declined, even when they are buying more usage.


To be sure, the near-zero pricing trend applies most directly to the cost per bit, rather than the effective retail price. But the point is that use of internet bandwidth keeps moving towards the point where using the resource is not a constraint on user behavior.

And that is the sense in which near-zero pricing matters: it does not constrain the use of computing hardware or communications networks for internet access.

Gates and Hastings have built big businesses on the assumption that Moore’s Law changes the realm of possibility. For communications services providers, there are lessons.

As Gates rightly assumed big businesses could be built on a widespread base of computers, and Hastings assumed a big streaming business could be based on low-cost and plentiful bandwidth, so service providers have to assume their future fortunes likewise hinge on owning assets in the app, device or platform roles within the ecosystem, not simply connectivity services.

Near-zero pricing matters.

Friday, May 4, 2018

Near-Zero Pricing Forces Continue to Operate in Much of Telecom Business

One of the biggest long-term trends in the communications business is the tendency for connectivity services to constantly drop towards “zero” levels. That is arguably most true in the capacity parts of the business (bandwidth), the cost of discrete computing operations, the cost of storage or many applications.

One can see this clearly in voice pricing, text messaging and even internet access (easier to explain in terms of cost per bit, but even absolute pricing levels have declined).

The reason it often does not seem as though prices have declined is that the value keeps increasing, as retail prices drop or remain the same.

In large part, marginal cost pricing is at work. Products that are "services," and perishable, are particularly important settings for such pricing. Airline seats and hotel room stays provide clear examples.

Seats or rooms not sold are highly "perishable." They can never be sold once the flight leaves or the day passes. So it can be a rational practice to monetize those assets at almost any positive price.

Whether marginal cost pricing is “good” for traditional telecom services suppliers is a good question, as the marginal cost of supplying one more megabyte of Internet access, voice or text messaging might well be very close to zero.

Such “near zero pricing” is pretty much what we see with major VoIP services such as Skype. Whether the traditional telecom business can survive such pricing is a big question.

That is hard to square with the capital intensity of building any big network, which mandates a cost quite a lot higher than “zero.”

In principle, marginal cost pricing assumes that a seller recoups the cost of selling the incremental units in the short term and recovers sunk cost eventually. The growing question is how to eventually recover all the capital invested in next generation networks.


On the other hand, we also must contend with product life cycles. As we have seen, in developed markets people use voice services less, so there is surplus capacity, which means it makes sense to allow people unlimited use of those network resources.

That was why it once made sense for mobile service providers to offer reduced-cost, and then eventually unlimited, calling “off peak.”

Surplus capacity caused by declining demand also applies to text messaging, where people are using alternatives. If there is plenty of capacity, offering lower prices to “fill up the pipe” makes sense. And even if most consumers do not actually use those resources, they are presented with higher-value propositions.

Video entertainment and internet access are the next products to watch. Video is more complicated, as it is an “up the stack” application, not a connectivity service. Retail pricing has to include the cost of content rights, which have not historically varied based on demand, but on supply issues.  

Linear video has already passed its peak, while streaming alternatives are in the growth phase.

Internet access, meanwhile, is approaching saturation. That suggests more price pressure on linear video and internet access, as less demand means stranded supply, and therefore incentives to cut prices to boost sales volume.

Marketing practices also play a big part, as the economics of usage on a digital network can be quite different than on an analog network. And some competitors might have assets they can leverage in new ways.

In 1998, AT&T revolutionized the industry with its “Digital One Rate” plan, which eliminated roaming and long-distance charges, effectively eliminating the difference between “extra cost” long distance and flat-fee local calling.

Digital One Rate did not offer unlimited calling at first, but that came soon afterwards. In the near term, lots of people figured out they could use their mobiles to make all “long distance” calls, using their local lines for inbound and local calling only.

With unlimited calling, it became possible to consider abandoning landline service entirely.

At least in part, the growth of mobile subscriptions from 44 million in 1996 to 182 million by the end of 2004 is a result of the higher value of mobile services, based in part on “all distance” calling.

Mobile revenue increased by more than 750 percent, from 10.2 billion dollars in 1993 to more than 88 billion dollars in 2003.


During this same time period, long distance revenue fell by 67 percent to 4.3 billion dollars, down from 13.0 billion dollars.

The point is that connectivity prices and some application (voice, messaging) prices have had a tendency to drop closer to zero over time. Moore’s Law plays a part. Open source also allows lower costs, and therefore more-competitive prices.

Optical fiber and microwave play a part in boosting capacity and lowering unit prices.  Internet protocol also helps (lower network interface costs).

Competition has had a larger impact. Regulatory cost reductions have been key in some markets.

AI Will Improve Productivity, But That is Not the Biggest Possible Change

Many would note that the internet impact on content media has been profound, boosting social and online media at the expense of linear form...