Friday, December 6, 2019

What if Bandwidth Were Free?

The "original insight" for Microsoft was the question: "What if computing computing were free?" It might have seemed a ludicrous question. Young Bill Gates reportedly asked himself what his business would look like if hardware were free, an astounding assumption at the time. 

“The mainframe we played tic-tac-toe on in 1968, like most computers of that time, was a temperamental monster that lived in a climate-controlled cocoon,” Gates wrote in his book The Road Ahead. “When I was in high school, it cost about $40 an hour to access a time-shared computer using a teletype.”

Owning a computer was an impossibility, as machines cost millions of dollars. In 1970, a computer cost perhaps $4.6 million, since mainframes were essentially the only computers available. Importantly, when Micro-Soft was founded, Gates concluded that the cost of computers would drop so far that hardware would not be a barrier to using them.

In 2004, Gates argued that “10 years out, in terms of actual hardware costs you can almost think of hardware as being free. I’m not saying it will be absolutely free, but in terms of the power of the servers, the power of the network will not be a limiting factor.”

You might argue that is a position Gates adopted recently. Others would argue it has been foundational in his thinking since Micro-Soft was a tiny company based in Albuquerque, New Mexico, in 1975. But prices did, in fact, tumble.

In 1972, an HP personal computer cost more than $500,000. In inflation-adjusted terms, an Apple II computer of 1977 would have cost $5,174, for example. 
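
As a rough illustration of that sort of inflation adjustment (the $1,298 Apple II launch price and the CPI figures below are approximations supplied here, not drawn from the sources above, so the result only roughly matches the figure cited):

```python
# Rough sketch: adjust a 1977 price into recent dollars using the CPI ratio.
# The nominal price and CPI values are approximate assumptions for illustration.
NOMINAL_PRICE_1977 = 1298.0   # approximate Apple II launch price, USD
CPI_1977 = 60.6               # approximate U.S. CPI-U annual average, 1977
CPI_RECENT = 251.1            # approximate U.S. CPI-U annual average, 2018

adjusted = NOMINAL_PRICE_1977 * (CPI_RECENT / CPI_1977)
print(f"Apple II in recent dollars: ~${adjusted:,.0f}")
# ~ $5,380 here; the $5,174 figure above reflects a slightly different base year.
```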

Microsoft's newer "insight question" was: "What if digital communication were free?" It's the same scenario, only this time it applies to the capacity to move data--audio and video as well as text--from one point to another. The technical term is bandwidth, and many other companies share the "insight." Says Intel CEO Andy Grove, for example, "If you think PC prices have plummeted, wait till you see what happens to bandwidth. 

As much as telecom executives might rue the observation, bandwidth is approaching the point where its use does not impede creation and use of applications, no matter how bandwidth-intensive they might be.

Can Telcos Capture New Platform Revenue?

About 70 percent of new value created through “digitalization” over the next decade will be based on platform-enabled, ecosystem-based business models, according to the World Economic Forum.

That should raise questions about how much of that economic activity can be captured by telcos, which mostly operate in non-platform markets. Basically, telecom is a “pipe” business, not only in the common parlance of selling connections, but also because of its “direct to customer” sales model.

Telecom is not the only business or industry where debates about business strategy include the issue of “pipes versus platforms.” In fact, almost all businesses use a “pipe” model: they source or create products and push them out through various distribution systems for sale to customers. Value is produced upstream and consumed downstream.

Virtually all consumer goods use a pipe model, as does manufacturing, media, most software products and education. 

Platforms are different. Unlike pipes, platforms do not just create products and sell them. Platforms allow users to create and consume value as well. When external developers can extend platform functionality using application programming interfaces (APIs), that usually suggests a platform model could exist.

Another way of stating matters is that, on a platform, users (producers) can create value on the platform for other users (consumers) to consume. Think of YouTube, Wikipedia, Amazon, Uber or Lyft. 

The business implications can be profound. Some attribute Apple’s rise to prominence in the phone industry not to its design, its user interface or its operating system features, but to its creation of an ecosystem and platform.

In fact, the ability to generate revenue from acting as an intermediary or marketplace for different sets of market participants is the functional definition of whether an entity is a platform.

That can be glimpsed in service provider video subscription businesses, where revenue is earned directly from subscribers, but also from advertisers and in some cases from content suppliers. It is the sort of thing eBay must do, daily, in a more direct way. 


To be sure, there are some differences between the traditional app provider platform and any possible connectivity provider platform. For starters, by definition, app platforms tend to be asset light. No surprise there. Software-based businesses almost always are less asset intensive than most other physical businesses. 

In general, however, any “pipe” business that sells a set of products directly to its customers will tend to require more owned assets than a software business that operates in an ecosystem.


At a high level, when executives and professionals in the connectivity business talk about dumb pipes, they almost always refer to commodity product business models selling undifferentiated carriage or delivery of bits. But there are other senses in which the “pipe” model also matters. 

If one believes that prices for telecom products are destined to keep declining, or that more for the same price is the trend, then there are a couple of logical ways to “solve” such problems. 

Connectivity providers can create and sell new or different network-based products, or shift into other, higher-value parts of the product ecosystem. That is one way to escape the trap of marginal cost pricing, which might be the industry’s existential problem.

But it is not clear whether telcos can create platform business models, and if so, where and how. The traditional connectivity business seems destined to remain a pipe (create products sold direct to consumer) model. There are glimmerings, though.

Some service providers that now also are video subscription providers can create an advertising venue or marketplace once the base of subscribers grows large enough. That provides one example.

Some data centers work on creating marketplaces or exchanges that enable transactions beyond cross connects, even if the revenue model is indirect (marketing potential, lower churn, higher tenancy, greater volume).

So far, connectivity providers are mostly thinking about what could emerge with edge computing, beyond the pipe revenue model (selling compute cycles or storage). Some might envision a potential role in one or more internet of things use cases such as automobile IoT or unmanned aerial vehicle networks. 

Still, it never is easy for any company, in any industry, to create a platform model, even if many would prefer it over a pipe model.

But the potential path forward seems logical enough. The historic path to create a platform has often involved sale of some initial direct product, sold to one type of customer, before becoming the foundation for creation of the marketplace or platform that creates new value for different sets of customers. 

That strategy might be called stand-alone use, creating a new market by directly satisfying a customer need, before a different two-sided or multi-sided market can be created, where at least two distinct sets of participants must be brought together, at the same time, for the market to exist. Virtually any online marketplace is such a case. 

Others might call it single-player. OpenTable, which today has a marketplace revenue model, originally provided only a reservation system to restaurants, operating in a single-sided market mode, before it could create a two-sided model in which restaurants pay for booked reservations made by consumers.

The first million people who bought VCRs bought them before there were any movies available to watch on them. That might strike you as curious, akin to buying a TV when there are no programs being broadcast. 

In fact, though the VCR was commercialized about 1977, it was not firmly established that sales of VCRs were lawful until 1984, when the U.S. Supreme Court ruled that Sony could sell VCRs without violating copyright law, rejecting the claims of Hollywood studios.

So what were those people doing with their VCRs? Taping shows to watch later. Time shifting, we now call it. Only later, after Blockbuster Video was founded in 1985, did video rentals become a mass market phenomenon. 

So here is the point: quite often, a new market is started one way, and then, after some scale is obtained, can develop into a different business model and use case. 

Once there were millions of VCR owners, and content owners lost their fear of cannibalizing their main revenue stream (movie theater tickets), it became worthwhile for Hollywood to start selling and renting movies to watch on them. 

Eventually watching rented movies became the dominant use of VCRs, and time shifting a relatively niche use. 

So OpenTable, which operates in a two-sided marketplace--connecting restaurants and diners--started out selling reservation systems to restaurants, before creating its new model of acting as a marketplace for diners and restaurants.

The extent to which that also will be true for some internet of things platforms is unclear, but likely, even for single-sided parts of the ecosystem. 

The value of any IoT deployment will be high when there is a robust supply of sensors, apps, devices and platforms. But without many customers, the supply of those things will be slow to grow, even in the simpler single-player markets. Just as likely, though, is the transformation of at least some of the single-player revenue models to two-sided marketplaces. 

In other words, a chicken-and-egg problem will be solved by launching one way, then transitioning to another, more complicated two-sided model requiring scale and mutual value for at least two different sets of participants. In a broad sense, think of any two-sided market as one that earns revenue by creating value for multiple sets of participants.

Amazon makes money from product sellers and buyers, while at the same time also earning revenue from advertisers and cloud computing customers. 

Telcos have faced this problem before. 

Back in the 1870s and 1880s, when the first telephone networks were created, suppliers faced a severe sales problem. The value of the network depended on how many other people a customer could call, and early on that number was quite small. Communications services have a network effect: they become more valuable as the number of users grows.
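
One common, though simplified, way to formalize that network effect (not spelled out in the post itself) is Metcalfe's law: the number of possible connections, and thus roughly the potential value, grows with the square of the number of users. A minimal sketch:

```python
# Minimal sketch of Metcalfe's-law-style scaling: potential pairwise
# connections grow roughly with the square of the number of subscribers.
def potential_connections(subscribers: int) -> int:
    return subscribers * (subscribers - 1) // 2

for n in (10, 100, 10_000):
    print(n, "subscribers ->", potential_connections(n), "possible connections")
# 10 -> 45, 100 -> 4,950, 10,000 -> ~50 million: why early networks with few
# reachable parties were hard to sell, and why mature networks are not.
```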

These days, that is generally no longer the case. The number of people, accounts and devices connected on the networks is so large that the introduction of a new network platform does not actually face a network issue. The same people, devices and accounts that were connected on the older platform retain connectivity while the new platform is built. 

There are temporary supply issues as the physical facilities are built and activated, but no real chicken and egg problem. 

It remains to be seen whether some connectivity providers also will be able to create multi-sided (platform) markets for internet of things or other new industries.

The initial value might simply be edge data center functions. Later, other opportunities could arise around the use of edge computing, the access networks, customer bases and app providers. It would not be easy; it rarely is. But creating new revenue streams from customers who just want edge computing cycles could create a foundation for other revenue streams as well.

The point is that it is not so clear telcos will reap much of the bounty.

Hard to Separate Edge Computing, 5G, Optical Networking, AI and VR

In terms of evaluating potential impact, it is becoming very difficult to separate 5G and optical networking from edge computing and applied artificial intelligence as platforms for new use cases and applications that drive value for end users and enterprises, as well as revenue opportunities for computing and communications providers.

Consider the new industry forum, the Innovative Optical and Wireless Network Global Forum, founded by Nippon Telegraph and Telephone Corporation, Intel Corporation and Sony Corporation.

The global forum’s objective is to accelerate the adoption of a new communication infrastructure that brings together an all-photonics network infrastructure, including silicon photonics, edge computing and connected computing.

That includes artificial intelligence, dynamic and distributed computing, as well as digital twin computing (a computing paradigm that enables humans and things in the real world to be re-created and interact without restrictions in cyberspace).

For a few decades now, digital simulations and environments have been touted as ways to use “virtual worlds” to support real-world commerce. Some might recall SimCity, a 1989 videogame. So virtual reality is not essentially new. Digital twinning arguably is new.

The concept of creating virtual representations of the real world now is referred to by some as digital twin computing, and applies mostly in industrial settings. A digital twin is a dynamic, virtual representation of a physical asset, product, process, system or other entity, living or non-living.

Such digital representations model the properties, condition, and attributes of the real-world counterpart, and originally were useful for simulating the capabilities of machine tools in a safe and cost-effective way, as well as identifying the root causes of problems occurring in physical tools or infrastructure. 

If a physical machine tool breaks down or malfunctions, engineers can evaluate the digital traces of its virtual twin for diagnosis and prognosis.
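
As a purely illustrative sketch (the class and field names are hypothetical, not taken from any particular digital twin product), a twin can be as simple as a mirrored state object that keeps a digital trace of telemetry and flags readings that drift outside expected bounds:

```python
from dataclasses import dataclass, field

@dataclass
class MachineToolTwin:
    """Hypothetical digital twin of a machine tool: mirrors reported telemetry
    and flags readings that drift outside expected bounds."""
    asset_id: str
    expected_spindle_rpm: float
    rpm_tolerance: float = 0.05                  # 5 percent drift allowed
    history: list = field(default_factory=list)  # the "digital trace"

    def ingest(self, reported_rpm: float) -> None:
        self.history.append(reported_rpm)

    def diagnose(self) -> list:
        """Return readings that drifted beyond tolerance, for root-cause analysis."""
        limit = self.expected_spindle_rpm * self.rpm_tolerance
        return [r for r in self.history
                if abs(r - self.expected_spindle_rpm) > limit]

# Usage: feed telemetry from the physical tool, then inspect the trace.
twin = MachineToolTwin(asset_id="lathe-07", expected_spindle_rpm=1200.0)
for rpm in (1201.0, 1198.5, 1040.0, 1199.0):
    twin.ingest(rpm)
print(twin.diagnose())   # [1040.0] -> reading worth investigating
```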

Some believe the use of digital twinning, creating virtual replicas of physical objects and devices, eventually including humans, will be an important outcome of network and computing advances.

Twinning, AI in general and virtual reality, in turn, will often be related to edge computing. “Forty-three percent of AI tasks will be handled by edge computing in 2023,” said Kwon Myung-sook, CEO of Intel Korea, during a forum in Seoul. “AI devices empowered with edge function will jump 15-fold.”

Azure now allows users to invoke supercomputer instances from their desktops. Nvidia markets an edge computing platform incorporating AI, for example. 

Google Fiber Now Sells Gigabit Internet Access as its Sole Offer

When Google Fiber launched its symmetrical gigabit internet access service, typical U.S. fixed network internet access speeds might have averaged about seven to nine megabits per second. In retrospect, that was about the time that U.S. access speeds began a nonlinear ascent, powered by U.S. cable operator speed boosts. 

It is worth noting that fiber-to-home average speeds were not much better, about 7.7 Mbps downstream, with cable hybrid fiber coax average speeds in the 5.5 Mbps range and digital subscriber line lagging with an average speed of about 2.2 Mbps downstream. 

So Google Fiber, offering a symmetrical gigabit connection for $70 a month, was quite disruptive, in terms of reshaping expectations about speed and price per Mbps. 

Of course, perhaps we should not have been surprised at the growth of internet speeds. Back when dial-up modems were running at 56 kbps, Reed Hastings, Netflix CEO, extrapolated from Moore's Law to understand where bandwidth would be in the future, not where it was “right now.”

“We took out our spreadsheets and we figured we’d get 14 megabits per second to the home by 2012, which turns out is about what we will get,” Hastings said. “If you drag it out to 2021, we will all have a gigabit to the home.”
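
Hastings’ spreadsheet is not reproduced here, but the implied growth rates are easy to back out from the figures he cites; the 1997 start date for 56 kbps dial-up is an assumption:

```python
# Sketch: implied compound annual growth rates between the figures Hastings
# cites. The 1997 start year for 56 kbps dial-up is an assumption.
def implied_cagr(start_mbps: float, end_mbps: float, years: int) -> float:
    return (end_mbps / start_mbps) ** (1 / years) - 1

print(f"{implied_cagr(0.056, 14, 2012 - 1997):.0%} per year, 56 kbps -> 14 Mbps")  # ~44%
print(f"{implied_cagr(14, 1000, 2021 - 2012):.0%} per year, 14 Mbps -> 1 Gbps")    # ~61%
```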

So far, internet access speeds have increased at just about those rates. So what is the meaning of Google Fiber dropping its lower-speed tiers and selling only one service: a symmetrical gigabit service for $70 a month?

There are several likely reasons for the switch. The most obvious is that Google Fiber now routinely competes against cable TV operators offering speeds of a gigabit at prices that are not so different from Google Fiber, in real terms, once bundles and other promotions are included. 

Just as important, Google Fiber’s 100-Mbps service, sold for $50 a month, might actually be more expensive than some cable offers when internet access is purchased as part of a bundle. So one reason for streamlining is simply that the 100-Mbps offer is not getting traction: it no longer offers compelling value compared to standard cable internet access offers, though it often still does compared with telco services using copper access plant.
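
The value comparison is easiest to see as price per Mbps, using only the prices cited in this post (a back-of-the-envelope calculation, not a market survey):

```python
# Back-of-the-envelope price per Mbps for the two Google Fiber tiers cited above.
offers = {
    "Google Fiber gigabit ($70/month)": (70.0, 1000),
    "Google Fiber 100 Mbps ($50/month)": (50.0, 100),
}
for name, (price, mbps) in offers.items():
    print(f"{name}: ${price / mbps:.2f} per Mbps")
# $0.07 per Mbps for the gigabit tier versus $0.50 for the 100-Mbps tier,
# one way to see why the slower offer lost its appeal.
```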

But Google Fiber does not, as a practical matter, compete against slow DSL; it competes against cable TV services, as cable has about 66 percent of the installed base of customers, on average, across the United States, and has been taking share from telcos continuously since perhaps 1999.

Over the last 20 years, it would be hard to find a single year where cable broadband net account gains were not about 60 percent to 70 percent of all net gains, and over the last decade virtually all the net gains. 

To be sure, only about four percent of U.S. fixed network internet access customers seem to buy a gigabit per second service from any internet service provider. Altogether, some 77 percent of U.S. households buy internet access running between 50 Mbps and 300 Mbps.

In fact, 100 Mbps might have become the average U.S. downstream speed in 2018. 

The point is that Google Fiber and other independent ISPs selling gigabit per second service offer the most-distinctive value proposition as providers of symmetrical gigabit services, priced around $70 to $90 per month. Comcast generally prices stand-alone gigabit service between $105 and $140 a month.

Comcast has said that 75 percent of its customers now buy services operating at 100 Mbps or faster.

The point is that Google Fiber no longer is the sole gigabit offer in most U.S. urban or suburban markets and likely finds few customers interested in its 100-Mbps offer. Google Fiber’s uniqueness in most markets is its symmetrical gigabit offer, since cable services remain asymmetrical. 

Thursday, December 5, 2019

By 2023, 1/2 of Hyperscale Data Center Capex Might be Going to the Edge

Alphabet, Alibaba, Amazon, Apple, Baidu, Facebook, Microsoft, Rakuten and Tencent, which operate the biggest hyperscale data centers, also now are making initial moves to “own the edge.” Though in 2019 they might collectively spend about five percent of total capex on edge computing, by 2023 they could be devoting as much as 50 percent of capex to edge computing, according to a forecast by Technology Business Research.

The obvious question for connectivity providers is whether, and how much, new potential “partnerships” with some of the hyperscale providers might represent value and revenue for connectivity providers. 

Up to a point, edge computing facilities will scale faster if partnerships can be struck between ecosystem participants (central office owners, cell tower owners, connectivity providers and data centers, for example). 

The tricky task is striking deals that spread revenue in ways seen as fair by the participants. And, of course, eventual channel conflict seems somewhat inevitable. That noted, the multi-cloud trend and market for edge data center space and support (from racks to compute cycles) might create several niches. 

AWS already has signed Verizon, Vodafone, KDDI and SK Telecom as access partners, and also appears to be leasing data center space from the carriers. So, at the very least, it appears each of the telco partners will earn some data center leasing revenue from AWS, which is placing its servers in telco racks (presumably often in central offices). 

The telco partners likely also gain some amount of dedicated access (capacity) revenue from AWS and any enterprise customers using the AWS edge computing service. 

There is some potential indirect benefit, as often is the case, even from partnerships with little direct incremental revenue, as the availability of AWS edge computing services makes the connectivity service more valuable. 

The AWS partnership does provide near-term value in allowing each of the telco partners to tout their presence in the edge computing market, at low investment costs to them. 

On the other hand, it is hard to see much incremental revenue upside from this sort of partnership. The biggest revenue and profit returns will flow to providers that own the infrastructure and the services, as always is the case with owner’s economics.

Still, when markets are young, revenue upside likely is small and investment costs are high, so that “creep in” at low cost strategy reduces risk. Telcos gain some marketing platform advantages and some “rent rack space” revenue.

Greater levels of investment and risk come with business models “up the stack,” including operating an owned “edge computing facilities as a service” business. Another step up is to sell edge computing capabilities directly, not simply selling colocation space to third parties.

More intense models include some forms of system integration, owned business-to-business apps and, finally, direct sale of full retail apps to end users. Those sorts of efforts have been difficult for telcos in the past, but are not impossible.

On the other hand, it is hard to see how edge computing becomes a significant revenue generator for telcos unless they participate at some level beyond selling rack space.

Will Telcos Eventually Move Beyond AWS Wavelength?

AWS partnerships with a number of telcos as part of Wavelength, one of several new AWS edge computing initiatives, inevitably raise the issue of how such “partnerships” represent value and revenue for connectivity providers.



Wavelength limits telco risk, but also telco revenue and strategic upside. Eventually, some may try--or expand--other initiatives with more revenue and profit upside. Longer term, at least some of the connectivity providers may attempt to enter or grow other edge-related businesses. Both AT&T and Verizon own businesses in the auto communications area, for example. 


Still, AWS is making its edge computing strategy clearer, launching several initiatives, showing the competition access providers will face from others in the ecosystem.



AWS Launches Three Edge Computing Services

At least some tier-one connectivity providers will try to create edge-computing-as-a-service facilities and businesses. In that effort, they likely will face competition from tower companies, the hyperscale computing giants, data centers and possibly new entrants as well.

AWS is making its edge computing strategy clearer, launching several initiatives. AWS Wavelength embeds AWS compute and storage services within telecommunications provider data centers at the edge of the 5G networks.

AWS Local Zones extend edge computing by placing AWS compute, storage, database, and other select services closer to large population, industry, and IT centers where no AWS Region exists today.

AWS Local Zones are designed to run workloads that require single-digit millisecond latency, such as video rendering and graphics-intensive virtual desktop applications. Local Zones are intended for customers that do not want to operate their own on-premises or local data center.
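
From a developer’s perspective, Wavelength Zones and Local Zones show up as additional zone types in the standard EC2 APIs. A minimal sketch using boto3 (the region and zone group name below are assumptions for illustration; actual zone availability varies by region and account):

```python
import boto3

# Sketch: discover Wavelength Zones and Local Zones visible from a region,
# then opt in to a zone group so subnets and instances can be placed there.
ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption

zones = ec2.describe_availability_zones(
    AllAvailabilityZones=True,
    Filters=[{"Name": "zone-type", "Values": ["wavelength-zone", "local-zone"]}],
)
for z in zones["AvailabilityZones"]:
    print(z["ZoneName"], z["ZoneType"], z["OptInStatus"])

# Opting in to a zone group is required before launching workloads there.
# The group name below is illustrative, not a recommendation.
ec2.modify_availability_zone_group(
    GroupName="us-east-1-wl1", OptInStatus="opted-in"
)
```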

Likewise, AWS Outposts puts AWS servers directly into an enterprise data center, creating yet another way AWS becomes a supplier of edge computing services. “AWS Outposts is designed for workloads that need to remain on-premises due to latency requirements, where customers want that workload to run seamlessly with the rest of their other workloads in AWS,” AWS says.  

AWS Outposts are fully managed and configurable compute and storage racks built with AWS-designed hardware that allow customers to run compute and storage on-premises, while seamlessly connecting to AWS’s broad array of services in the cloud.

Will AI Fuel a Huge "Services into Products" Shift?

As content streaming has disrupted music and is disrupting video and television, so might AI disrupt industry leaders ranging from ...