Wednesday, July 7, 2021

B2B Sales Journeys Now Begin Online

No matter how long a business-to-business sales process takes, perhaps 60 percent to 70 percent of potential business product buyers have conducted research on solutions to their problems before they contact a sales professional.


In other words, “B2B customers today progress more than 70 percent of the way through the decision-making process before ever engaging a sales representative,” according to some studies.


The B2B sales journey in 2020 began online about 90 percent of the time, according to Forrester Research; Google likewise reports that search is involved 90 percent of the time.


In 2020, 80 percent of the average B2B buyer journey took place online, and whatever statistics you choose to believe, online research is a huge part of the buyer journey.  


The pandemic intensified that shift: 96 percent of B2B sales teams have moved to remote selling, according to McKinsey.


That includes checking a supplier’s website, doing online searches and reading user reviews, according to Marketing Charts. Some 94 percent of all potential business customers conduct online research before making any purchases valued at more than $100,000, for example.  


source: Marketing Charts 


Accenture notes that business buyers also increasingly expect a “consumer online” experience when buying B2B. In 2014, for example, about 40 percent of prospects conducted online research for a majority of goods priced under $10,000, and 31 percent did so at that frequency for goods costing at least $100,000.


More importantly, perhaps, many B2B transactions take place online as well. 


source: Marketing Charts 


That remained true in 2020: more than half of prospects reported finding solutions online, and 70 percent of buyers had defined their solutions before they ever contacted a sales professional.


source: Demand Base 


Work from Home Productivity Less Clear than Many Believe

Work-from-home productivity often is thought to be as good, if not better, than in-office productivity, at least in the short term, and at least for knowledge workers. It is less clear whether productivity for many types of government services has improved. 


source: McKinsey 


It likely is much more complicated and nuanced than that. Business owners, for example, do not share that belief, though employees like remote work. But such attitudes are not the same as productivity gains.


And there is some evidence that work-from-home productivity actually is lower than in-office productivity.


According to Microsoft, “when we compare collaboration trends in Microsoft 365 between February 2020 and February 2021:

  • Time spent in Microsoft Teams meetings has more than doubled (2.5X) globally and, aside from a holiday dip in December, continues to climb.

  • The average meeting is 10 minutes longer, increasing from 35 to 45 minutes.

  • The average Teams user is sending 45 percent more chats per week and 42 percent more chats per person after hours, with chats per week still on the rise.

  • The number of emails delivered to commercial and education customers in February, when compared to the same month last year, is up by 40.6 billion.¹

  • And we’ve seen a 66 percent increase in the number of people working on documents.”


So people spend much more time in meetings than they did when in the office; more time with email and chat; more time working on documents. 


In principle--and assuming one can measure it--productivity increases when output is boosted using the same or fewer inputs. An initiative in Iceland, which has notably low productivity, suggests that service productivity by units of government does not suffer when working hours are reduced.


To be sure, service sector productivity is devilishly hard to measure, if it can be measured at all. It is hard to measure intangibles. And there is some evidence that satisfaction with public sector services is lower than private services, and substantially lower for many types of government services. 


Productivity is measured in terms of producer efficiency or effectiveness, not buyer or user perception of value. But it is hard to argue that the low perceived quality of government services is unrelated to “productivity.” 


source: McKinsey 


And what can be measured might not be very significant. Non-manufacturing productivity, for example, can be quite low, in comparison to manufacturing levels. 


And there are substantial differences between “services” delivered by private firms--such as airline travel or communications-- and those delivered by government, such as education, or government itself. 

 

The study argues that reductions in work hours per week of up to 12.5 percent had no negative impact on productivity. Methodology always matters, though. 


The studies relied on group interviews--and therefore on self-reports--as well as some quantitative inputs such as use of overtime. There is some evidence that productivity (output) remained the same as hours worked were reduced.


For public service agencies, shorter working time “maintained or increased productivity and service provision,” the report argues. 


The quantitative evidence in the report about what was measured, and how it was measured, is perhaps ambiguous. The report says “shifts started slightly later and/or ended earlier.” To the extent that productivity (output) in any services context is affected directly by availability, the key would be the ability to maintain public-facing availability. The report suggests this happened.


But the report says “offices with regular opening hours closed earlier.” Some might question whether this represents the “same” productivity. Likewise, “in a police station, hours for investigative officers were shortened every other week.” Again, these arguably are input measures, not output measures. 


So long as the defined output levels were maintained, the argument can be made that productivity did not drop, or might formally have increased (same output, fewer inputs). In principle, at least over the short term, it should be possible to maintain public-facing output while reducing working hours. Whether that is sustainable long term might be a different question. 
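One can sketch the “same output, fewer inputs” arithmetic as follows; the figures are hypothetical, chosen only to illustrate a 12.5 percent reduction in hours, and are not taken from the Iceland report:

```python
# Illustrative productivity arithmetic; numbers are hypothetical,
# not drawn from the Iceland working-time report.

def productivity(output_units: float, hours_worked: float) -> float:
    """Productivity as output per unit of labor input (units per hour)."""
    return output_units / hours_worked

output = 400                       # e.g., invoices entered per week (assumed)
hours_before = 40.0                # standard week
hours_after = 40.0 * (1 - 0.125)   # 12.5 percent fewer hours: 35.0

before = productivity(output, hours_before)   # 10.0 units per hour
after = productivity(output, hours_after)     # about 11.43 units per hour

gain = (after - before) / before
print(f"Same output, 12.5% fewer hours: measured productivity up {gain:.1%}")
```

If output truly holds constant, a 12.5 percent cut in hours shows up as roughly a 14 percent measured productivity gain, which is why the distinction between input measures and output measures matters so much here.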


The report says organizations shortened meetings, cut out unnecessary tasks, and reorganized shift arrangements to maintain expected service levels. Some measures studied were the number of open cases, the percentage of answered calls or the number of invoices entered into the accounting system. 


In other cases the test seemed to have no impact on matters such as traffic tickets issued, marriage and birth licenses processed, call waiting times or cases prosecuted, for example. Some will say that is precisely the point: those output metrics did not change as hours were reduced.


Virtually all the qualitative reports are about employee benefits such as better work-life balance, though, not output metrics.

Will Enterprise 5G Follow the Traditional Pattern?

Enterprise private networks long have been a lucrative field for system integrators. And that pattern already seems to be emerging for enterprise private 5G networks.


A study by BearingPoint suggests mobile service providers are winning fewer enterprise 5G projects than anticipated a year ago, while the percentage of enterprise “do it yourself” projects also fell short of original expectations. 


Specialized service providers, on the other hand, are winning vastly more deals than expected a year ago. That category includes system integrators and others who specialize in building and operating private networks. 

source: BearingPoint


Compared to 2020, a BearingPoint analysis of enterprise 5G suggests less optimism about roles for mobile service providers. In 2020, BearingPoint expected 21 percent of enterprise 5G projects would be led by a mobile service provider. This year, that expectation dropped to 16 percent.


Involvement by mobile service providers as a secondary supplier also dropped from the 2020 estimate of 40 percent to the 2021 projection of 37 percent. 


Enterprise “do it yourself” rose from 20 percent to 32 percent. 


source: BearingPoint

Tuesday, July 6, 2021

AT&T Divestitures of Warner Media, DirecTV Illustrate Value of a Portfolio Approach

Virtually all analyses of AT&T’s spinning out Warner Media and DirecTV focus on the “back to the core business” implications. Some of us might argue there is a strategic rationale for owning assets as a portfolio, without full control. 


There are many advantages. 


Under a portfolio approach, connectivity providers do not destroy value by insisting that every asset contribute directly to the communications revenue stream, products or positioning. If one owned a minority stake in Amazon, Google, Facebook or other assets, one would not insist that those marketplaces, apps or platforms be available “only on our network; only under our brand; only under terms that require buy-through of our other services.”


Executives are able to grow their businesses as they deem best, unconstrained by mother ship politics or needs. Business leaders often can create greater value faster, as equity multiples are unburdened by connectivity provider valuation metrics. 


Warner Bros. Discovery benefits from a pure-play content asset valuation, for example.


Some companies have made an art form of holding minority stakes in firms without needing control. In other cases, majority ownership works as well, though an argument can be made that overall value growth (equity value, for example) tends to happen when minority interests are held.


That portfolio approach arguably makes lots of sense when incumbents seek to expand their growth profiles by moving up the stack into applications, platforms or apps. In a clear sense, AT&T’s spinning out of Warner Media and DirecTV offers some similar advantages.


Merging Warner Media with Discovery gives AT&T a $43 billion cash infusion while removing the asset from its books. But AT&T still will own 70 percent of the new Warner Bros. Discovery asset.


The sale of DirecTV and related video operations to private equity firm TPG raises $7.8 billion in cash, removes the assets from AT&T’s books, but also allows AT&T to retain 71 percent ownership of the asset. 


The move frees AT&T executives to focus on the remaining connectivity core businesses, while retaining exposure to assets that could well produce higher returns than core AT&T. The video entertainment assets generate cash flow, if not equity value growth. 


The point, some of us would argue, is that overall asset value growth, profits and profit margins might be higher using a portfolio approach, compared to retaining all the assets within the communications units.


For firms such as AT&T that must deleverage, spinning out assets--even if losing control--might well result in greater overall financial value. That might be true even if the original perceived value of full control and ownership was the ability to consolidate revenue, profits and cash flow for the corporate parent. 


True, owning minority stakes in platforms, apps and marketplaces creates overall equity value, but not direct revenue, profit or value for the connectivity unit. But it does create a pathway to diversifying firm value and operations away from strict reliance on connectivity revenue sources.


That might be especially important if one believes the connectivity revenue streams will be under continual pressure, going forward. Many connectivity providers might prefer to say they are something other than “connectivity providers.” 


But doing so has proven to be difficult when new lines of business are forced to operate “inside” the connectivity operation. Let them remain outside. Simply own a portfolio of valuable and growing businesses that complement the connectivity core, without stifling them. If at some point in the future it makes sense to reshuffle assets, missions and roles, that can be done later. 


If one believes the connectivity business will be under pressure in coming decades--struggling to provide value and hence to earn high profit margins and gross revenue--then a transition to additional lines of business is essential, as difficult as that might be.


A portfolio approach has much to offer.


SD-WAN Service Provider Revenue, Though Important, Is Smallish

The SD-WAN infrastructure market has been the larger segment of the SD-WAN market since its inception. Yet, despite its importance for enterprise communications, the market remains small by global standards, representing perhaps $2.5 billion in worldwide sales volume in 2021, according to Markets and Markets.

source: Verified Market Research 


Others believe the market--managed services and infrastructure sales--was larger: $6 billion in 2020, IDC estimated. 


IDC projects an infrastructure market worth about $4.6 billion in sales by 2022. Globally, that is a relatively small number. That might also imply that the managed services market is less than $2 billion in annual revenues. 

source: IDC 


Some project that service provider sales will be bigger than infrastructure sales in the future, as connectivity provider service revenues have--by some estimates--been bigger than do-it-yourself approaches since 2016. Others would say infrastructure sales are higher than managed service revenues, which might not have reached $1.5 billion in 2020, for example.


The issue is how much bigger, or how much smaller, managed services revenue is, compared to sales of infrastructure. By some estimates, the managed service portion of the business is nearly the same size as the infrastructure market, which supports both managed and DIY approaches, according to Frost and Sullivan data.


source: Frost and Sullivan


Basically, SD-WAN is an enterprise networking service that replaces MPLS. So one way of estimating the size of the SD-WAN market is to look at the size of the MPLS market, which might be estimated at no bigger than $32 billion--including infrastructure and managed service approaches--at its peak. 


source: Cisco  
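As a back-of-envelope sketch of that replacement logic, the SD-WAN opportunity can be bounded by the MPLS spend it displaces. The displacement rate and pricing discount below are assumptions for illustration only, not sourced figures:

```python
# Rough sizing sketch: SD-WAN opportunity bounded by displaced MPLS spend.
# The displacement and discount rates are illustrative assumptions.

mpls_market = 32e9      # peak MPLS market cited above, in dollars
displacement = 0.20     # assume 20 percent of MPLS spend shifts to SD-WAN
price_ratio = 0.50      # assume SD-WAN is priced at half the MPLS equivalent

sdwan_opportunity = mpls_market * displacement * price_ratio
print(f"Implied SD-WAN opportunity: ${sdwan_opportunity / 1e9:.1f} billion")
```

With those assumptions the implied market is about $3.2 billion, at least broadly consistent with the $2.5 billion to $6 billion range the cited estimates span.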


Monday, July 5, 2021

Price Anchoring Matters for Broadband Access

Pricing of broadband access services is not so different from pricing of any other popular consumer product, luxury goods excepted. Price anchoring applies to list prices of consumer broadband services no less than to other products.


"Price anchoring" is the reason most consumers able to buy gigabit internet access do not do so. Price anchoring is the tendency for consumers to evaluate all offers in relationship to others. As the saying goes, the best way to sell a $2,000 watch is to put it right next to a $10,000 watch.


Anchoring is why "manufacturer's suggested retail pricing" exists.  It allows a retailer to sell a product at a price the consumer already evaluates as being "at a discount." Price anchoring is why a "regular price" and a "sale price" are shown together. 


In the internet access business, price anchoring explains why gigabit access speeds are priced in triple digits, while low speeds are priced in low double digits, while the tiers most consumers buy are priced in between those extremes.


Service providers who sell a range of internet access products differentiated by speed and price might “typically” find that a minority of customers actually buy the “fastest” tier of service. That is largely because of price anchoring.


People often evaluate a "best quality offer, at highest price" against the "lowest quality offer, at lowest price," before concluding that the "best" value is the mid-tier quality, at the mid-tier price.


That was true in the past when the top speed was 100 Mbps as well. Most consumers did not buy the "highest quality" offer, whatever it was.


One study by Parks Associates shows that as speeds climb, the percentage of buyers who think their service is “faster than required” grows. Nearly half of early gigabit speed buyers reported that the service was “faster than they needed.” That means a value gap exists. Consumers perceive that what they are buying provides excess functionality, and they know they are paying for that excess, compared to other offers. 

source: Parks Associates 


The “average” U.K. fixed network internet access bill might range from $33 to $41 a month, with a typical bill costing about $35 a month. 


Keep in mind that 80 percent of U.K. customers buy landline services in a bundle containing at least two services, which might skew comparisons of stand-alone broadband access service prices. In other words, very few U.K. customers buy a stand-alone broadband service. 


Also, customer preferences play a role, as faster services cost more, and demand for faster services is growing. “Ultrafast services (speeds above 100 Mbps) cost around £18 ($25) per month more than the equivalent superfast services with advertised speeds below 100 Mbps,” notes Ofcom. That suggests a typical “above-100 Mbps” monthly cost in the area of $60 per month. 


Prices also might vary by usage allowance, plans offering “unlimited usage” costing more than plans with some usage cap. 


The point is that consumers make rational choices about broadband just as they do for other products. Perceived value might be highest for services in the middle of the speed range, compared to the value of the fastest tier of service or the lowest.


All that could change rapidly if the cost gap between a 200-Mbps service and a gigabit service continues to narrow in absolute terms. When the difference between the cost of a “good enough” solution and the “best” solution is relatively narrow, some buyers will trade up to the higher-priced product.
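A hypothetical tariff makes the arithmetic concrete; the speeds and prices below are invented for illustration, but the pattern of falling cost per Mbps at higher tiers is typical:

```python
# Hypothetical broadband tiers (Mbps -> dollars per month); illustrative only.
tiers = {100: 40, 200: 50, 1000: 70}

for mbps, price in sorted(tiers.items()):
    cents_per_mbps = price / mbps * 100
    print(f"{mbps:>4} Mbps: ${price}/mo ({cents_per_mbps:.0f} cents per Mbps)")

# Absolute cost to trade up from "good enough" to "best":
upgrade_cost = tiers[1000] - tiers[200]
print(f"Trading up to a gigabit costs ${upgrade_cost} more per month for 5x the speed")
```

When that absolute gap is $20 rather than $50 or more, the anchoring logic starts to flip: the "best" tier begins to look like the better value, and some buyers trade up.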


Value nearly always matters.


Sunday, July 4, 2021

MWC in 20 Years?

As Mobile World Congress 2021 unfolds with pandemic-reduced attendance, down roughly two thirds from the last pre-pandemic year, most of us wonder how the recovery process will unfold.


The practical immediate metric for most of us will be how soon attendance returns to the last pre-pandemic year. 


That all makes sense for the near term. But one logical question we must ask about maturing markets--in connectivity or elsewhere--is how demand for exhibitions and trade shows changes over time. 


Fragmented markets create value for such events, as they aggregate buyers so sellers can affordably reach them. 


In 2016 the NCTA gave up its annual trade show, for example. In 2010 Supercomm, the former big U.S. telco show, was canceled. In both cases, industry maturity made the meetings expendable. CTIA, once the big U.S. mobile industry show, declined to the point where it essentially became a GSMA event in 2017. 


The point is that big annual industry trade shows only make sense when markets are young and fragmented. Once consolidation has happened, the logic for holding them goes away, as sellers know exactly how to reach their buyers. 


So it is not unreasonable to speculate about whether pandemic-induced travel bans and trade show cancellations will, in some cases, accelerate thinking about their value, in markets that are becoming more concentrated. Value, arguably, will remain high in yet-fragmented markets. 


Such processes unfold over decades, but are a clear pattern. Consolidated markets have less need of big trade shows.


Directv-Dish Merger Fails

Directv’s termination of its deal to merge with EchoStar, apparently because EchoStar bondholders did not approve, means EchoStar continue...