Sunday, October 7, 2018

SDN and NFV are Different, But Telcos Will Use Both

Even if the terms are used interchangeably, network functions virtualization (NFV) and software defined networking (SDN) arguably are different. But both seem to be part of the broader push towards core networks that are virtualized and easily programmable. So SD-WANs reflect an SDN approach, while use of white box network elements represents NFV. Telcos will do both.

At a high level, one might argue that the business outcome for network functions virtualization (NFV) is lower cost networks. One might also argue that the business outcome for SDN, while contributing to lower cost, is greater network control.

NFV is the process of moving services such as load balancing, firewalls and intrusion prevention systems away from dedicated hardware into a virtualized environment, many would agree. Others might note that NFV makes use of “virtual machines” to supply network functions.

And, as with many innovations, the initial drive to reduce cost eventually leads to new thinking about use cases, revenue streams and service creation. On the other hand, SD-WANs are a prime example of the opposite trend: a new service offering requiring separation of control plane and data plane.


And many would say one core attribute of NFV is the ability to separate control plane functions from data plane operations, so software can run on commodity hardware. But that is a core principle of SDN as well.  

Both are based on network abstraction. But SDN and NFV differ in how they separate functions and abstract resources.

SDN abstracts physical networking resources--switches, routers and other network elements--and moves decision making to a virtual network control plane. In this approach, the control plane decides where to send traffic, while the hardware continues to direct and handle the traffic.

NFV aims to virtualize all physical network resources beneath a hypervisor, which allows the network to grow without the addition of more devices.

In other words, SDN separates network control functions from network forwarding functions.  NFV abstracts network forwarding from the hardware on which it runs. You might also argue that NFV provides basic networking functions, while SDN controls and orchestrates them for specific uses.

So what makes that different from a software defined network (SDN)? That often is hard to explain. SDN aims to automate processes, while NFV often aims only to virtualize them. Software defined networking is an approach that uses open protocols, such as OpenFlow, to apply globally aware software control at the edges of the network, to access network switches and routers that typically would run closed and proprietary firmware, some would say.

In principle, then, an entity can deploy SDN without NFV, or NFV without SDN.  

As a practical matter, NFV often is easiest to understand as a way of separating software and controller functions from dedicated network elements. By implementing network functions in software that can run on a range of industry-standard server hardware, NFV aims to reduce cost and make networks more flexible.

SDN, on the other hand, seeks to create a network that is centrally managed and programmable. In other words, SDN separates lower-level packet forwarding from higher-level network control.
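The control-plane/data-plane split can be sketched in a few lines of Python. This is a toy illustration only (the class and method names are invented for this example, not any real controller API): the controller holds the global policy and makes forwarding decisions, while the switch merely caches and applies the rules it is given.

```python
# Toy sketch of SDN's control/data plane separation (illustrative
# names, not a real controller API): the controller decides where
# traffic goes; the switch only matches packets against installed rules.

class Controller:
    """Centralized control plane: computes forwarding decisions."""
    def __init__(self):
        self.policy = {}  # destination -> output port, set by the operator

    def set_route(self, dst, port):
        self.policy[dst] = port

    def flow_rule_for(self, dst):
        # Decide once centrally, then push the rule down to the data plane.
        return self.policy.get(dst, "drop")

class Switch:
    """Data plane: forwards packets using rules pushed by the controller."""
    def __init__(self, controller):
        self.controller = controller
        self.flow_table = {}  # destination -> action (installed rules)

    def handle(self, packet):
        dst = packet["dst"]
        if dst not in self.flow_table:
            # Table miss: ask the controller, then cache the answer.
            self.flow_table[dst] = self.controller.flow_rule_for(dst)
        return self.flow_table[dst]

ctrl = Controller()
ctrl.set_route("10.0.0.2", "port-2")
sw = Switch(ctrl)
print(sw.handle({"dst": "10.0.0.2"}))  # port-2 (decided by the controller)
print(sw.handle({"dst": "10.0.0.9"}))  # drop (no policy installed)
```

The point of the sketch is that the switch contains no routing intelligence at all; swap the controller's policy and every switch behaves differently, which is exactly the centralized programmability SDN promises.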

Saturday, October 6, 2018

Service Providers Embrace Open Source

The global telecom industry has come a long way from the days of proprietary platforms, solutions and network elements. Consider the heavy service provider membership in the Telecom Infra Project, an effort to develop open source platforms for communications and mobile networks.


A survey of 150 mostly-technical professionals from 100 companies finds 73 percent are “extremely” or “mostly” confident that open networking solutions can achieve the same level of performance as traditional networking solutions.

Some 59 percent of respondents say they currently are using open networking solutions, and some 84 percent of those that are not plan to do so within the next three years.

Still, technology immaturity is the biggest concern for 46 percent of respondents. The next closest concern, at 23 percent, was performance itself.

As you would expect, cost savings are the driver. Some 75 percent of respondents say cost savings are the expected outcome of deploying open networking solutions.

The survey of 150 networking professionals from 100 communications service providers globally included 48 percent who work for converged service providers owning both fixed and mobile networks.

Mobile service providers were 25 percent of the sample, while wireline, cable and satellite operators made up 22 percent. Some 57 percent of respondents are from North America, 17 percent from Europe and 14 percent from Asia.

Respondents from Central/South America comprised eight percent of the sample, and those from Middle East/Africa represented four percent.

Survey respondents worked in technical roles. Nearly 25 percent work in engineering and 20 percent say they work in network design and planning. Some 19 percent work in network operations and 11 percent in research and development.

Friday, October 5, 2018

Where Could Blockchain Add Value in Communications or Media?

Will disintermediation be one of the ways blockchain ultimately has value in the “technology, media and telecom” (TMT) industry? Possibly. Disintermediation is the process of removing distributors from any supply chain. Think “over the top” and you get the concept. So anything that promises disintermediation could have big consequences in the TMT space.

In the case of blockchain, that disintermediation could be a positive, not a negative, for content owners or distributors, though. Think about the problem of authenticating users and subscribers: participants in any social media transaction or in any highly-distributed access services environment.

Consider the case of a mobile services provider that amalgamates access to multiple networks, including assets secured from two or more other underlying service providers. Think of Google Fi, which uses Wi-Fi, Sprint and T-Mobile US networks. In some future scenario, perhaps blockchain is used to authenticate users for access to each of the participating networks.

To be sure, there are other ways of doing so. The issue is whether blockchain might be easier or cheaper, eventually, perhaps for cross-border (international roaming) transactions, for example. International settlements always are seen as a value of blockchain, in terms of taking cost out of such transactions.

The idea is that blockchain could have value whenever databases must be kept or transactions completed. Communications and content arguably have lots of places where those two things happen.

Blockchain is a technology of more than average potential usefulness in the “technology, media and telecom” industry (or industries; it is hard to say which is the more-apt description), according to consultants at McKinsey. In fact, in most industries, blockchain might have both low feasibility and relatively-modest impact, the consultants say.

Essentially, blockchain offers the hope of “perfect audit history,” without fraud. That obviously has implications for the financial industry, or any situation where “trust” is essential. And since “money” is always based on trust, that matters.

But trust has become a bigger issue for social media and advertising as well, which is likely why blockchain could have relevance in the TMT space. Though blockchain is not foolproof, it arguably is hardier than most other ways of using databases, as fraud generally requires wide-scale collusion (more than half of all participating computers would have to be in on the attempt, McKinsey essentially argues).

Nor can blockchain check on the integrity of data that is input into the database. “All that the blockchain itself does is ensure the integrity of the individuals making the transaction, ensuring that you have the right combination of a public and private key,” McKinsey analysts note.




Blockchain: One View of What it Is, and Is Not

Blockchain is one of those concepts one hears about all the time (as are artificial intelligence and machine learning). It likely is destined to be important in the communications industry, but in ways that are not always intuitive, or necessarily visible to most practitioners.

It is rather akin to “electricity,” “computing,” “cloud computing” or “open source” in that sense.




Thursday, October 4, 2018

AT&T Builds 5G on 4G

It often is said that 5G builds on 4G, and that is correct. Consider AT&T, which is boosting 4G speeds as it launches 5G markets. AT&T plans to bring mobile 5G to 12 cities in 2018, reaching at least 19 cities in early 2019.

AT&T also has announced 99 new 5G Evolution markets, bringing the total number of such markets with these technologies to 239. 5G Evolution markets are locations where peak theoretical wireless speeds for capable devices are at least 400 megabits per second.

AT&T says 5G Evolution will be available  in over 400 markets by the end of 2018. In the first half of 2019 AT&T plans to offer nationwide coverage, making 5G Evolution available to over 200 million people.

The other technology AT&T is deploying is LTE-LAA, which boosts peak theoretical wireless speed for capable devices to a gigabit per second. LTE-LAA is now live in parts of 20 cities with plans to reach at least 24 cities in 2018.

In terms of devices, AT&T offers 13 devices capable of accessing both 5G Evolution and LTE-LAA network technologies. The devices include: LG V30 and LG V35 ThinQ, Motorola Z2 Force Edition, Netgear Nighthawk Mobile Router, Samsung Galaxy S8 and Galaxy S9 series devices and others.


Saturday, September 29, 2018

Could Edge Computing Change Smartphone Design and Cost?

Edge computing is almost always touted as a necessity in the 5G era to support ultra-low-latency services, the typical examples being support for autonomous vehicles, remote surgery or even prosaic requirements such as supporting channel changes on video screens supporting ultra-high-definition TV (4k, 8K, virtual reality).

But are there other possibilities? Consider the advent of the Chromebook, a “PC” that essentially conducts all computing activities at a remote cloud data center. The advantage is lower-cost customer premises equipment (CPE).

Sure, one needs a screen, power supply, keyboard and some amount of on-board memory and processing. But not so much. It often is said, with a good measure of truth, that a Chromebook is a device supporting a browser, and not much more.

So can edge computing support a similar approach to the design of smartphones, essentially creating a device that resembles earlier efforts to create network-centric computing devices? Maybe, some think.

Could edge computing create new opportunities for access providers supplying phone services? AT&T believes that could happen.

AT&T plans to build thousands of small edge computing data centers in central offices and other locations across the United States. So could a big edge computing network affect mobile phone design as much as cloud computing has affected the design and use of computing devices? AT&T’s Mazin Gilbert, VP, thinks that is a possibility.

Edge computing could create the conditions for really cheap smartphones. “Can my $1,000 mobile phone be $10 or $20, where all the intelligence is really sitting at the edge?” Gilbert asks. “It’s absolutely possible.”

That obviously would dramatically reduce barriers to smartphone use by everyone, while providing some means of differentiation for access services provided by AT&T. Both trends would provide more reasons for consumers or businesses to use the AT&T network, instead of rival networks.

It has been decades since tier-one telcos had a significant role in the customer premises equipment business. Back in the monopoly era, telcos actually made and sold the phone devices people used. In fact, it was illegal to use any phone not manufactured by the service provider.

In the competitive era, service providers have been irrelevant as suppliers of CPE, as that role was ceded to device suppliers active in the consumer electronics space.

Edge computing could change those assumptions. Perhaps a firm such as AT&T licenses the building of cheap smartphones that rely extensively on edge computing and are designed to work on AT&T’s network.

As always, that approach will start out as a “useful for many people” but not a “full and complete substitute” for standard smartphones able to work globally. But not every customer requires global roaming. For most customers, coverage most places in the United States will work.

As any Chromebook user will attest, the “connect to Wi-Fi or you cannot do too much” approach is not perfect. You cannot “compute” anywhere (except to conduct offline transactions or activities). But it works, especially if one has the ability to tether to a smartphone.

Something like that could be possible once edge computing is fully built out.

U.S. Device Adoption is Near Saturation

Use of communications-dependent devices obviously has direct implications for communications service demand. So it matters that U.S. consumers now are reaching--or already have reached--saturation levels of device use.

Not to belabor the point, but device and account saturation strongly suggests that demand for new services and apps has to be created, beyond current levels of functionality for devices and connections.

That is one reason why many believe 5G is going to be different than all prior generations of mobile platforms. It will be the first platform where brand-new value, and therefore new revenue opportunities, will be created by enterprises. Consumer demand for phone functions and connecting other devices is fairly well saturated.

source: Pew Research Center

Thursday, September 27, 2018

Why Nobody Releases Gigabit Take Rates, Yet

Not one U.S. internet service provider publicizes the take rates it gets for gigabit internet access. Historically, no ISPs have done so for their fastest tiers of service, either. The reason, as you might suspect, is that it is highly likely take rates for such tiers of service are rather modest, and tend to be purchased by businesses rather than consumers.

Eventually that could change, but only when gigabit access is the mid-tier offer.

Back in the days when cable TV operators first were rolling out consumer Internet access at speeds of 100 Mbps, it was virtually impossible to get subscriber numbers from any of the providers, largely because take rates were low.

In the United Kingdom, then planning on upgrading consumer Internet access speeds to “superfast” 30 Mbps, officials complained about low demand. In fact, demand for 40 Mbps was less than expected.

So “gigabit” internet access remains mostly a marketing platform, not an indicator of what services people actually buy, when they have access to gigabit services.

Value versus price is the likely reason for consumer behavior. “Value (performance versus price)” seems to be evaluated as best in the mid ranges of internet access service, not the “fastest” grades of service. Nor is that an unusual situation for most product categories.

In Australia, in 2016, for example, perhaps 15 percent of consumers purchased the then-fastest speed tier of 100 Mbps. Some 47 percent bought the mid-range service at 25 Mbps. Some 33 percent of buyers were content with service at the slowest speed of 12 Mbps.

Likewise, even where fiber-to-home connections are available, that does not mean most consumers will buy such service, if other options also are available. Data from New Zealand suggests take rates might be 33 percent where FTTH is sold.

Price has much to do with those choices, as do perceptions of value. The safest assumption is that multi-user households are most likely to buy faster tiers of service, reasoning that the connection bandwidth has to be shared by all members of the household.

Also, since there always is a direct relationship between internet access purchases and higher incomes, we should not be surprised if cost-conscious consumers opt for less-expensive packages, while higher-income consumers are most likely to buy the most-expensive packages, which also are the fastest.

The takeaway is that most consumers buy the mid-tier offers. According to Federal Communications Commission data, in 2015 the most popular advertised speed plans purchased by consumers clustered around 100 Mbps for cable providers.

AT&T U-verse plans generally were in the 45 Mbps range in 2015, while DSL speeds (all-copper access) were quite low, in comparison. Verizon FiOS speeds were generally in the 80-Mbps range.

Over time, as speeds increase, consumers have tended to keep upgrading. But they have generally tended to buy the mid-tier services. That is what AT&T has found as it increases the top speeds available.  CenturyLink also found that to be the case.  

In 2010, for example, about 40 percent of U.S. consumers were buying Internet access at about 6 Mbps. You might wonder why, but the answer is simple. In 2010, the 6-Mbps service offered what consumers then considered the best value for the money paid.

Wednesday, September 26, 2018

U.S. Internet Access is Not "Expensive"

One always can get a good argument about whether internet access markets in the United States are getting less competitive or more competitive. What often gets lost in such discussions are facts. Everyone is entitled to an opinion; but not their own facts.

And there are several ways to look at internet access services, in the United States or anywhere else. For starters, there is a difference between mobile access and fixed network access. Most studies of internet access globally tend to focus on “fixed network” access, even when, in many markets, most people only use mobile internet access.

Availability is one important metric: can consumers buy service? Take rates are a different matter. Even where available, not every consumer wants to buy a fixed network service. Nor do consumers tend to buy the fastest service available. Instead, they compare value with price, and almost always buy services that are “good enough,” and neither the fastest nor slowest options available.

Speeds also vary from country to country, and within countries (urban, rural), and by provider (telco, cable, satellite, fixed wireless, mobile). We always can argue about what speeds are “good enough.”


Finally, there is the matter of price. Many only look at price in absolute terms, not relative terms. In other words, they look at total price, not the price as a percentage of buyer income. That matters whenever one is making international comparisons.



To state what should always be obvious, prices are higher in more developed economies, and that applies to internet access prices as well. Consider mobile broadband.

Fixed network internet access prices in developed nations--measured as a percentage of gross national income--are quite low, less than one percent of gross national income per capita.

Prices for mobile internet access, as a percentage of gross national income, are even lower. The point is that U.S. internet access prices, as a percentage of household income or per-capita gross domestic product, are quite low by global standards.

In other words, U.S. internet access is not expensive.



Tuesday, September 25, 2018

Cable TV Operators Gradually Start to Compete with Each Other

Historically, cable TV companies do not compete directly with each other in the same geographic areas. That is changing a bit, though. In the United Kingdom, if Comcast completes its purchase of Sky, Sky and Liberty Global (Virgin) will compete head to head, for the first time in the U.K. market.

That is something that has happened in telecom markets, both mobile and fixed, and some have wondered how long it would be until cable companies began to compete in such a manner as well. We appear to be one step closer, in the U.K. market.

In the U.S. market, such head-to-head competition is more likely to come as cable TV companies get into the mobility business, as has been the case for U.S. telcos generally. Even when firms such as AT&T, Verizon and CenturyLink mostly have not competed against each other in the fixed network area, there has been no way to limit competition when mobile networks operate ubiquitously across the country.

That means AT&T and Verizon, for example, were early on forced to compete against each other nationwide, in the mobile arena. In the fixed networks area, they have not competed in the same territories.

That now is changing as Verizon plans a 5G fixed wireless attack in AT&T areas (out of region). But Liberty Global and Comcast now will face each other as direct competitors in the U.K. market as well. That is new.

Revenue Upside and Cost Reduction Will Drive Networks towards Edge Computing

There are three major reasons why edge computing is going to reshape networking architectures: revenue, cost and functionality.

On an internal level, network cost and functionality are shifting towards use of edge computing to support access networks in the 5G era. For starters, centralizing radio processing further into the network reduces radio cell site costs, in addition to improving flexibility.

On the revenue side, core networks will evolve towards edge computing to reduce latency, a primary requirement for creating new applications that require one-millisecond or just a few milliseconds latency.
source: Nokia
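A back-of-the-envelope calculation shows why millisecond latency budgets push computing toward the edge. The figures below are assumptions for illustration (light in fiber covers roughly 200 km per millisecond, and the site distances are hypothetical), but the conclusion is robust: distance alone consumes the budget long before any processing happens.

```python
# Rough latency-budget check (assumed values): propagation delay in
# fiber is about 200 km per millisecond, one way, before any
# processing, queuing or radio-link time is counted.
SPEED_IN_FIBER_KM_PER_MS = 200  # ~2/3 the speed of light, a common approximation

def round_trip_ms(distance_km):
    """Propagation delay alone for a there-and-back trip, in milliseconds."""
    return 2 * distance_km / SPEED_IN_FIBER_KM_PER_MS

# A distant cloud region versus a metro edge site (illustrative distances):
for label, km in [("regional cloud, 1,500 km", 1500), ("metro edge, 50 km", 50)]:
    print(f"{label}: {round_trip_ms(km):.1f} ms round trip")
# A 1 ms application budget is blown by distance alone unless the
# compute sits within roughly 100 km of the user.
```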

Sunday, September 23, 2018

Disintermediation in the Subsea Business

“Disintermediation” is a term some attendees at the PTC Academy event in Bangkok, Sept. 20 and 21, 2018, heard for the first time. The term simply means that product and service providers go direct to end users and customers, rather than using distributors.

Since communications service providers are distributors, that has key implications. Think “over the top” and you get the point: apps go direct to customers and end users with no direct business relationship between the app/platform and the user.

To an astonishing degree, market demand for wide area communications has shifted away from telcos and to application and platform providers.

The amount of undersea traffic carried by the largest U.S. application and platform providers grew to 339 Tbps between 2013 and 2017. International capacity supplied by internet transport companies grew to 350 Tbps.

“15 years ago, 100 percent of my clients were telcos,” said Sean Bergin, APTelecom president. “Now 80 percent of my customers are OTTs.”


So platform and app companies Google, Facebook, Microsoft and Amazon do not yet move more bits than service providers do, but arguably will do so in the future. And that “function substitution” has happened in telecommunications before.

Though you are familiar with mobile substitution--the use of mobile networks to displace use of fixed networks--the substitution happening elsewhere is “over the top” substitution for carrier services and value.

In the undersea and wide area network business, that means enterprises of a particular type (tier-one application and platform suppliers) are creating and owning their own transmission networks, and no longer buying capacity from transport providers. And that also means disintermediation of the communications service provider.


Put another way, wide area networks now are experiencing product substitution, as did fixed network service providers, where mobile services are preferred to fixed services. As "over the top" apps, platforms and services often displace carrier services and apps, so enterprises (app, platform, device providers) increasingly have found it makes sense to own their own global networks. 


And that means the demand for capacity services from "public" networks (telcos) is diminished. In other words, as bandwidth demand grows, the amount of growth available as "revenue for service providers" diminishes. 


That trend can be seen clearly in the growth of transoceanic capacity that is supplied directly and internally by app and platform providers directly, on their own private networks. 

In other words, OTT now covers a much-wider range of business cases, all based on disintermediation, where producers go straight to their customers or users, without relying on distribution partners. 


Intel Follows Pattern: Replace 1/2 of Current Revenue Sources Every Decade

One rule of thumb I use when looking at business model change is to assume that a tier-one service provider will have to replace half its current revenue with new sources every decade. And that might be a reasonable rule for suppliers of apps, platforms, devices and components as well.

In 2012, for example, Intel earned nearly 70 percent of revenue from “PC and mobile” platforms. By 2018, PC/mobile had dropped to about half of total revenue. By 2023 or so, Intel should generate 60 percent or more of total revenue from sources other than PC/mobile.
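That rule of thumb implies a steady annual turnover rate, which is easy to work out: if a constant fraction r of revenue is replaced each year and (1 - r) raised to the tenth power equals 0.5, then r works out to roughly 6.7 percent per year.

```python
# Worked arithmetic for the rule of thumb: replacing half of current
# revenue every decade implies a steady annual turnover rate r
# satisfying (1 - r) ** 10 == 0.5.
annual_turnover = 1 - 0.5 ** (1 / 10)
print(f"~{annual_turnover:.1%} of revenue replaced per year")
```

In other words, "half every decade" is not a distant cliff but a continuous churn of roughly one-fifteenth of the revenue base every single year.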


If you hear executives talking so much about innovation and new services, that is why: companies need to replace half their revenue every decade, and do so in every decade, from now on.


The good news is that, as tough as that sounds, firms have shown they can do so.

Yes, Follow the Data. Even if it Does Not Fit Your Agenda

When people argue we need to “follow the science” that should be true in all cases, not only in cases where the data fits one’s political pr...