Tuesday, November 3, 2020

How Much Can Telcos Cut Sales Costs?

Intangible products such as music, video, print content, banking transactions and even communications services are among those most easily sold “online” or “digitally,” displacing physical forms of distribution. 


Communications products also are intangible, so a logical question is how channels of distribution might change over time, with “sales” and “fulfillment” becoming more virtual and less physical. 

source: A.D. Little 


The issue is whether digital fulfillment allows connectivity providers to cut operating costs or capital investment.  


For consumer mobility services, the switch might be experienced as ordering a new phone online and then activating online, with no need to visit a physical retail outlet. Small business customers might find basic data and voice services could be ordered online as well. 


Eventually, even more complicated enterprise services might be sold without use of sales forces. That might seem fanciful, but consider the traditional value of a human enterprise sales force: expert knowledge of the complexity of network offers and the requirements for business process support, plus the ability to match needs with solutions.


Even if we assume that every enterprise situation is custom to a significant extent, there are patterns, which means rules can be created. 


And any rules-based process can be enhanced by use of artificial intelligence systems. That means, in principle, that the value of human experts should be capable of replication in an AI-enhanced sales and fulfillment process. 
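As a rough illustration of what a rules-based sales process might look like, consider the sketch below. The service names and rule thresholds are invented for the example, not any carrier's actual catalog.

```python
# Hypothetical sketch: a rules-based matcher for enterprise service needs.
# Service names and thresholds are illustrative, not a real product catalog.

RULES = [
    (lambda req: req["sites"] > 1 and req["private"], "managed VPN"),
    (lambda req: req["latency_ms"] is not None and req["latency_ms"] < 10, "private line"),
    (lambda req: req["voice_seats"] > 0, "hosted voice"),
]

def recommend(requirements):
    """Return the services whose rules match a customer's stated needs."""
    return [service for rule, service in RULES if rule(requirements)]

# A customer with three sites, a privacy requirement, tight latency
# needs and voice seats matches all three rules.
print(recommend({"sites": 3, "private": True, "latency_ms": 5, "voice_seats": 20}))
```

An AI-enhanced version would learn such rules from historical sales data rather than hand-coding them, but the principle is the same: where patterns exist, matching can be automated.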


source: A.D. Little 


If one assumes that connectivity providers must reduce operating and capital investment costs to maintain profit margins in slow-growth to no-growth markets, then sales and customer care costs are among the areas where the biggest opportunities for savings might be found. 


Monday, November 2, 2020

Fixed Network Business Models Now Based on "Dumb Pipe"

Intangible products such as music, video, print content, banking transactions and even communications services are among those most easily sold “online” or “digitally.” Another way of describing the change in channels of distribution is to note that, over time, “sales” and “fulfillment” became more virtual and less physical. 


So the issue is the extent to which connectivity services sold to consumers and small businesses might also become more “virtual” over time. For consumer mobility services, the switch might be experienced as ordering a new phone online and then activating online, with no need to visit a physical retail outlet. 


source: A.D. Little 


That retail virtualization is perhaps a mirror of the content and applications virtualization that already has reshaped the connectivity business. 


“Over the top” applications and services are more than a revenue model, a strategy and an asset ownership model. They reflect fundamental changes in how computing and communications networks are designed and operated. 


In a broad sense, OTT represents the normal way any computing network operates, and since all telecom networks now are computer networks, there are clear business model implications. 


Though it is so familiar we hardly notice it anymore, communications network architecture, like computing and software architecture, mirrors a profound change in possible communications, media and content industry business models. 


The separation of access from apps, transport from other functions now is a fundamental reality of communications, software design and applications. The whole idea is to compartmentalize and separate computing or communications functions so development can happen elsewhere without disrupting everything else. 


The desired outcome is the ability to use any app on any device on any network, while making changes and upgrades rapidly within each layer or function. Abstraction is another way of describing the architecture. Devices do not require detailed knowledge of what happens inside the network black box (which is where the notion of “cloud” came from). 


Devices only need to know the required interface. That also explains the prevalence of application programming interfaces, which likewise allow the use of abstracted functions. 
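A minimal sketch of that abstraction principle, with class names invented for illustration: a device depends only on a declared interface, so the network's internals can change without touching the caller.

```python
# Sketch of layered abstraction: the "device" calls only a declared
# interface; the network's internals are hidden behind it.
from abc import ABC, abstractmethod

class AccessNetwork(ABC):
    """The only thing a device needs to know: the interface."""
    @abstractmethod
    def send(self, packet: bytes) -> bool: ...

class FiberAccess(AccessNetwork):
    def send(self, packet: bytes) -> bool:
        # Transport details (routing, framing, QoS) stay inside the black box.
        return True

class Device:
    def __init__(self, network: AccessNetwork):
        # Depends on the abstract interface, not any implementation.
        self.network = network

    def upload(self, data: bytes) -> bool:
        return self.network.send(data)

assert Device(FiberAccess()).upload(b"hello")
```

Swapping `FiberAccess` for any other implementation of `AccessNetwork` leaves `Device` untouched, which is exactly the "any app on any device on any network" property the architecture aims for.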


What we often forget is that these technology conventions have business model implications. Simply stated, the business model (all the inputs and operations needed to supply a product to a customer for a profit) mirrors the architecture of software and networks.

source: Henry Chesbrough 


Which is to say business models now are built on abstracted ecosystems and value chains. The clearest illustration of that is the phrase “over the top,” which describes the ability of any third party application or service provider to reach any customer or user on any standard internet connection.


That “open” process contrasts sharply with the old “closed” analog telco model where the only apps or devices that could be used on the network were owned or permitted by the connectivity services provider. 


That is why the terms “over the top” and “dumb pipe” have developed. Where in the past telcos sold services that used a network (voice, messaging, video entertainment), now they also sell “data network access,” where the product the customer buys is, strictly speaking, a “dumb pipe” that enables access to applications. 


The irony is that, to the extent that dumb pipe internet access is the foundational service now sold to fixed network consumers, and a core product for mobile network customers, revenue streams now are built on the dumb pipe.


Keep in mind that all telecom networks now are computer networks. The value lies in enabling access to applications. Some of those apps are owned by the connectivity provider (public network voice, public network messaging, linear or OTT entertainment video, virtual private network services, private line, hosted voice and--in some cases--enterprise applications). 


But the dominant value of the dumb pipe internet access is access to all other third party applications based on delivery using the public internet. 


The great irony is that, as much as connectivity providers “hate” being dumb pipe providers, their business models now are based on it.


Friday, October 30, 2020

"Digital Transformation" Will be as Hard as Earlier Efforts at Change

New BCG research suggests that 70 percent of digital transformations fall short of their objectives. 


That would not surprise any of you familiar with the general success rate of major enterprise technology projects. From 2003 to 2012, only 6.4 percent of federal IT projects with $10 million or more in labor costs were successful, according to a study by Standish, noted by Brookings.

source: BCG 


IT project success rates range between 28 percent and 30 percent, Standish also notes. The World Bank has estimated that large-scale information and communication projects (each worth over U.S. $6 million) fail or partially fail at a rate of 71 percent. 


McKinsey says that big IT projects also often run over budget. Roughly half of all large IT projects—defined as those with initial price tags exceeding $15 million—run over budget. On average, large IT projects run 45 percent over budget and seven percent over time, while delivering 56 percent less value than predicted, McKinsey says. 


Significantly, 17 percent of IT projects go so badly that they can threaten the very existence of the company, according to McKinsey. 


The same sort of challenge exists whenever telecom firms try to move into adjacent roles within the internet or computing ecosystems. As with any proposed change, the odds of success drop as the number of required approvals or sequential activities increases.


The rule of thumb is that 70 percent of organizational change programs fail, in part or completely. 


There is a reason for that experience. Assume you propose some change that requires just two approvals to proceed, with the odds of approval at 50 percent for each step. The odds of getting “yes” decisions in a two-step process are about 25 percent (.5x.5=.25). In other words, if only two approvals are required to make any change, and the odds of success are 50-50 for each stage, the odds of success are one in four. 


source: John Troller 


The odds of success get longer for any change process that actually requires multiple approvals. Assume there are five sets of approvals. Assume your odds of success are high--about 66 percent--at each stage. In that case, your odds of success are about one in eight for any change that requires five key approvals (.66x.66x.66x.66x.66≈.13, or 32/243). 
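The compounding arithmetic behind both examples can be sketched in a few lines; the function name and figures are just illustrative.

```python
# The chance that every approval in a chain says "yes" is the
# product of the per-step odds of approval.

def odds_of_success(per_step: float, steps: int) -> float:
    """Probability that `steps` independent approvals all succeed."""
    return per_step ** steps

print(round(odds_of_success(0.5, 2), 2))   # two 50-50 approvals: 0.25
print(round(odds_of_success(2/3, 5), 3))   # five approvals at ~66%: 0.132
```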


The same sorts of issues occur when any telecom firm tries to move out of its core function within the ecosystem and tries to compete in an adjacent area. 


Consultants at Bain and Company argue that the odds of success are perhaps 35 percent when moving to an immediate adjacency, but drop to about 15 percent when two steps from the present position are required and to perhaps eight percent when a move of three steps is required.

source: Bain and Company


The common thread here is that any big organizational change, whether an IT project or a move into new roles within the ecosystem, is quite risky, even if necessary. The odds of success are low, for any complex change, no matter how vital.


Why 4G Sometimes is Faster than 5G

As always, the amount of spectrum available to any mobile service provider correlates with potential data throughput. As AT&T, for example, has rolled out 5G service, it has relied on low-band assets initially.


And no amount of fancy signal processing is going to compensate for a smaller amount of spectrum available to support 5G, compared to 4G. If you look at the total amount of spectrum supporting AT&T’s 5G coverage, you can see that its 4G spectrum is more capacious than its 5G spectrum. 


source: PCmag 


That means AT&T’s 5G network--for the moment--offers less speed than the 4G network. That will change over time, and likely quite substantially. 


Over the last decade, average (or perhaps typical) mobile data speeds have grown exponentially (a straight line on a logarithmic scale), according to data compiled by PCmag. I cannot tell you whether the graph shows median or mean speeds, but the point is that, assuming the same methodology is used for all data, the exponential trend would still hold. 

 

source: PCmag 


There is no reason to believe 5G will fail--over time--to continue the exponential trend, with the release of huge amounts of new spectrum, expanded use of spectrum sharing and spectrum re-use, plus small cell access.


Wednesday, October 28, 2020

Need for Global Scale Will Limit Telco IoT, Edge Computing Success

Among other reasons, lack of global scale is likely to prevent most telcos or mobile operators from becoming leading providers of internet of things or edge computing solutions or platforms. Generally, scale economics work against most telcos, no matter how large. 


That is not to say large telcos cannot significantly diversify revenue streams. AT&T has managed to shift its revenue sources enough that perhaps 43 percent of total revenue comes from something other than connectivity services. Softbank (at least until recently) had managed to generate perhaps 33 percent of total revenue from non-connectivity sources, while KT had reached about the same level. 


source: GSMA 


Many other tier-one telcos have managed to add between 10 percent and 25 percent of total revenue from sources other than connectivity. The need for scale seems to apply for those operations as much as it matters for the core connectivity business. But there are issues beyond scale. 


To be sure, new services such as the internet of things and edge computing will make some contribution to service provider revenues. Still, most of the value and revenue from IoT will be created elsewhere in the value chain (semiconductors, devices, platforms, integration, application software), not in connectivity. 


Perhaps edge computing will show the same dynamics, as edge computing still is about computing. That means the leading suppliers of computing--especially cloud computing--have a reasonable chance of emerging as the leading suppliers of workload as a service at the edge. 


Simply, if it is logical to purchase compute cycles from a major cloud or premises computing supplier, it will likely make just as much sense to purchase edge compute the same way. 


In other words, customers tend to have clear preferences about the logical suppliers of various products, beyond scale. The phrase “best of breed” captures the thinking. If an enterprise or other entity is looking at premises computing, it looks to certain brands. If a company is looking for cloud computing, it looks to other brands. 


Almost never is a telco among the logical five potential choices for buying compute cycles or computing platforms. 


That noted, tier-one telcos have made important strides diversifying beyond core connectivity. Among the issues are the extent to which that can happen in the edge computing or IoT realms.


BT to Build Private 5G Network for Belfast Harbor

BT says it is building and will operate a private 5G network on behalf of Belfast Harbor, covering large parts of the 2,000-acre site in 2021. BT says it aims to build “a state-of-the-art 5G ecosystem within the Port.”


Aside from supporting mobile phone service, the private network will enable remote controlled inspection technology (presumably use of drones), reducing the need for workers to climb towers. The network also will support air quality sensors. 


One can guess from those two examples--and BT’s talk of developing an ecosystem--that most of the expected smart harbor applications have not yet been deployed or developed, or perhaps have not yet been adapted to work on the 5G private network. 


Joe O’Neill, Belfast Harbor chief executive, says the network is intended to support accurate tracking and integration of data gathered from multiple sources, and expects the new network to help it capture, process and interpret data in real time.


Tuesday, October 27, 2020

It's Hard to Win a Zero-Sum Game

Zero-sum games are hard to win, in part because every winner is balanced by a loser. Many mature mobile communications markets are largely zero-sum games these days. Market share, by definition, means one supplier gains exactly what another supplier loses. 


That is not the case for new, emerging or growing markets, where virtually all contestants can, in theory, gain while nobody loses. 


The substitution of machines for human labor is something of a zero-sum game as well.


The notion of tradeoffs is key for zero-sum markets. Consider minimum wage laws or unionization of employees. The issue is not whether those things are good or bad, but simply the tradeoffs that are made. 


Higher minimum wage laws produce higher wages for a smaller number of employees, in part because higher wage minimums increase the attractiveness of substituting machines for human labor. 


Higher union membership and bargaining power tends to produce higher wages for union members, but often at the cost of the number of people who are employed at unionized businesses. 


The other trend we see is that when forced to make a choice, unions tend to prefer saving a smaller number of jobs in return for gaining higher wages. Workers with less seniority normally are sacrificed in such deals. 


We can disagree about whether Uber and Lyft drivers are independent contractors or employees. But it is not hard to argue that if employee classification leads to higher minimum wages, it also will lead to fewer Uber and Lyft drivers able to work. 


We can make any choices we want about which outcome we prefer: more work for more people or higher wages for fewer workers. But the choices will inevitably be made. It’s a zero-sum game.


As more and more telecom markets reach saturation, zero-sum outcomes will appear in market share statistics or the number of 4G phone account subscribers versus 5G subscribers.


Mobile operators can bend the curves a bit by changing value propositions, adding new features and bundling devices and features (up to a point) to encourage customers to switch to more-expensive plans when the offers are compelling. But all of that occurs within a business that is largely a zero-sum game in many markets.


"When I Use a Word, it Means just What I Choose it to Mean"

Telecom terminology changes from time to time. These days, a “core network” for a private 4G or 5G network requires software we formerly associated with a mobile network core, such as base station control functions, routing, synchronization, timing and so forth.

These days “voice” often refers to the interface people use to interact with their phones, smart speakers or car communication systems, rather than the older notion of voice phone calls. 

Broadband used to be defined as any data rate of 1.544 Mbps or higher. These days it is some higher number that we adjust periodically. 

“Mobility” used to refer to use of mobile phones and cellular networks. These days it often refers to ride sharing. 

“Over the top” has been used in the past to describe video entertainment, messaging or voice applications provided by third parties and accessed by users and customers over any internet connection. Today it might more properly describe any service or application accessed over a communications network that is not owned by the supplier of access services.

“When I use a word, it means just what I choose it to mean,” says the Lewis Carroll character Humpty Dumpty. That is an exaggeration as applied to the use of terms in telecom, but the general drift is correct. 

Wednesday, October 21, 2020

2020 was Tough for Mobile Subscriptions, Better for Fixed Network Internet Access

With the caveat that usage is not identical to revenue earned from that usage, 2020 has generally not been a favorable year for mobile operator subscription growth, with a couple of exceptions, according to the Economist Intelligence Unit. 


Fixed network internet access has held up better in most markets, with the strongest growth in the Middle East and Africa. 

source: Economist Intelligence Unit 


Regions that saw the strongest fixed network subscription growth will see lower rates in 2021, while mobile subscription growth will improve in virtually every region in 2021.


Friday, October 16, 2020

Brownouts are an Issue, But Might be Almost Unavoidable

Brownouts tend to be a typical feature of most networks using internet protocol. Where most measures of availability (reliability, as we sometimes call it) track the times or percentage of time when a resource is unavailable, brownouts represent the times when a network or resource does not operate at its designed level of performance.


Just as an electrical brownout implies a severe drop in voltage but might not be an outage, a network brownout follows a sharp degradation in link quality but might result in the affected circuits still being technically “up,” Oracle says. “This decline may be triggered by congestion across the network or a problem on the service provider’s end.”


Brownouts are in one sense “a feature not a bug,” a deliberate design choice that prioritizes resiliency over guaranteed throughput. That is the whole architectural principle behind internet protocol, which sacrifices routing control and quality of service on defined routes in favor of resiliency gained by allowing packets to travel any available route. 


And since the availability of any complex system reflects the combined effect of all potential element failures, it should not come as a surprise that a complete end-to-end consumer user experience is not “five nines,” though enterprise networks, with more control of transport networks and end points, might be able to replicate five-nines levels of performance. 


The theoretical availability of any system is the product of the availabilities of its independent components. For example, if a system uses just two independent components, each with an availability of 99.9 percent, the resulting system availability is slightly less than 99.8 percent (0.999 x 0.999 ≈ 0.998). 


Component       Availability

Web             85%
Application     90%
Database        99.9%
DNS             98%
Firewall        85%
Switch          99%
Data Center     99.99%
ISP             95%

source: IP Carrier 


Consider a 24×7 e-commerce site with lots of single points of failure. Note that no single part of the whole delivery chain has availability of more than 99.99 percent, and some portions have availability as low as 85 percent.


The expected availability of the site would be 85%*90%*99.9%*98%*85%*99%*99.99%*95%, or  59.87 percent. Keep in mind that we also have to factor in device availability, operating system availability, electrical power availability and premises router availability. 
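That multiplication is easy to check; a short sketch using the component figures above:

```python
# End-to-end availability is the product of each component's availability.
from functools import reduce

components = {
    "Web": 0.85, "Application": 0.90, "Database": 0.999, "DNS": 0.98,
    "Firewall": 0.85, "Switch": 0.99, "Data Center": 0.9999, "ISP": 0.95,
}

availability = reduce(lambda a, b: a * b, components.values())
print(f"{availability:.2%}")  # roughly 59.87%
```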


In choosing “best effort” over “quality of service,” network architects opt for “robustness” over “reliability.” 


source: Digital Daniels

Building Something from "Nothing"

“You can only build something from nothing with a private equity mindset,” says Matthias Fackler, EQT Partners head of infrastructure Continental Europe. It’s an interesting phrase. In the case of connectivity assets, it might imply a view that infrastructure--in some cases--is worth "nothing" or very little.


The statement also illustrates two key issues in the connectivity business: low revenue growth and low profitability.


source: STL Partners


So almost by definition, if private equity firms are active in an industry, it means there are financial stresses. 


Private equity is about the buying of public assets, taking them private and then selling, typically when a public company asset is deemed to be underperforming. Quite often, the goal is to sell the assets within five years. That virtually always means that long-lived investments such as capital investment in networks are avoided, with the emphasis on operational restructuring. 


Public companies tend to “buy to keep.” Private equity always “buys to sell.” In other words, private equity acts as a turn-around specialist. They arguably excel when able to identify the one or two critical strategic levers that drive improved performance. 


They have a relentless focus on enhancing revenue, operating margins, and cash flow, plus the ability--as private entities--to make big decisions fast. That might be a greater challenge than is typical as a result of the Covid-19 pandemic, which is depressing connectivity provider revenues and profit margins.  



Thursday, October 15, 2020

NVIDIA Maxine: AI and Neural Network Assisted Conferencing

Moore's Law Shows Up in iPhone, Nvidia Video Conferencing SDKs

Moore’s Law continues to be responsible for extraordinary advances in computational power and equally-important declines in price. Apple’s new iPhone uses lidar that used to cost $75,000.


Separately, researchers at Nvidia now have Maxine, a software development kit for developers of video conferencing services that uses artificial intelligence and a neural network to reduce video bandwidth usage to one tenth of H.264. Nvidia expects Maxine also will dramatically reduce costs. 


Maxine includes application programming interfaces for face alignment, gaze correction, face re-lighting and real-time translation, in addition to capabilities such as super-resolution, noise removal, closed captioning and virtual assistants, Nvidia says. 

These capabilities are fully accelerated on NVIDIA GPUs to run in real time video streaming applications in the cloud.

Maxine-based applications let service providers offer the same features to every user on any device, including computers, tablets, and phones, Nvidia says.


NVIDIA Expects to Use AI to Slash Video Conference Bandwidth



Researchers at Nvidia have demonstrated the ability to reduce video conference bandwidth by an order of magnitude. In one example, the required data rate fell from 97.28 kB/frame to 0.1165 kB/frame, a reduction to roughly 0.1 percent of the required bandwidth.

FCC Will Clarify Section 230

Section 230 of the Communications Decency Act of 1996 was intended to promote free expression of ideas by limiting platform exposure to a range of laws that apply to other publishers.


In principle, the Act provided a safe haven for websites and platforms that wanted to provide a platform for controversial or political speech and a legal environment favorable to free expression. It has not apparently worked out that way, as there is growing concern that platforms are acting to suppress free speech. 


Section 230 says that “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”  


In other words, platforms that host or republish speech are protected against a range of laws that might otherwise be used to hold them legally responsible for what others say and do. Ironically, a law intended to promote freedom of speech now is viewed by many as enabling the suppression of free speech. 


“Members of all three branches of the federal government have expressed serious concerns about the prevailing interpretation of the immunity set forth in Section 230 of the Communications Act,” says Federal Communications Commission Chairman Ajit Pai. “There is bipartisan support in Congress to reform the law.”


The Federal Communications Commission’s general counsel says the FCC has the legal authority to interpret Section 230 of the Communications Act of 1996. “Consistent with this advice, I intend to move forward with a rulemaking to clarify its meaning,” says Pai. 


“Social media companies have a First Amendment right to free speech,” says Pai. “But they do not have a First Amendment right to a special immunity denied to other media outlets, such as newspapers and broadcasters.” 


The U.S. Department of Commerce has petitioned the Commission to “clarify ambiguities in section 230.” Earlier this week, U.S. Supreme Court Justice Clarence Thomas pointed out that courts have relied upon “policy and purpose arguments to grant sweeping protections to Internet platforms” that appear to go far beyond the actual text of the provision.


Many believe that clarification process is likely to remove “overly broad” interpretation that in some cases shields social media companies from consumer protection laws.


It is perhaps an unfortunate development, to the extent that the antidote to limited free speech would preferably be  “more speech by more speakers,” as the corrective to market monopoly is “more competition.”


On the Use and Misuse of Principles, Theorems and Concepts

When financial commentators compile lists of "potential black swans," they misunderstand the concept. As explained by Nassim Taleb ...