Sunday, July 11, 2021

Pearson's Law--Not Productivity Improvements--Is What We Now See from "Work From Home"

Some of us seriously doubt we can deduce anything at all about short-term productivity changes from work-at-home arrangements in most cases of knowledge or office work. The reason is Pearson’s Law.



Pearson's Law states that “when performance is measured, performance improves. When performance is measured and reported back, the rate of improvement accelerates.” In other words, productivity metrics improve when people know they are being measured, and even more when people know the results are reported to managers. 


Put another way, “what you measure will improve,” at least in the short term. It is impossible to know whether productivity--assuming you can measure it--actually will remain better over time, once the near-term scrutiny subsides.


What we almost certainly are seeing, short term, is a measurement effect, as Pearson’s Law suggests.


The exceptions include call center productivity, which is easier to quantify in terms of output.


Many argue, and some studies maintain, that remote work at scale did boost productivity. One might counter that, most of the time, we actually have no idea.


That workers say they are more productive is not to say they actually are more productive. 


Also, worker satisfaction is not the same thing as productivity. Happy workers can be less productive; unhappy workers can be more productive. In all too many cases, this is an apples-to-oranges comparison.


Because subjective self-reports are one thing and measurable results another, we likely must discount all self-reports, whether they suggest higher, the same or lower productivity.


The other issue is the difficulty of measuring knowledge work or office work productivity. Call center activities are among the easiest to measure quantitatively, and there is evidence that remote call center workers are, indeed, more productive. Whatever their case quotas, call center workers tend to finish them faster when working at home.


There is some evidence that work-from-home productivity actually is lower than in-office productivity. In principle--and assuming one can measure it--productivity increases as output is boosted using the same or fewer inputs.


An initiative in Iceland, a country with notably low productivity, suggests that the service productivity of government units does not suffer when working hours are reduced, at least over the short term. Among the issues--aside from whether we can actually measure productivity in the studied settings--is Pearson’s Law at work.


To be sure, service sector productivity is devilishly hard to measure, if it can be measured at all. It is hard to measure intangibles. And there is some evidence that satisfaction with public sector services is lower than with private services, and substantially lower for many types of government services.


Productivity is measured in terms of producer efficiency or effectiveness, not buyer or user perception of value. But it is hard to argue that the low perceived quality of government services is unrelated to “productivity.” 


source: McKinsey 


And what can be measured might not be very significant. Non-manufacturing productivity, for example, can be quite low, in comparison to manufacturing levels. 


And there are substantial differences between “services” delivered by private firms--such as airline travel or communications--and those delivered by government, such as education, or government itself.

 

The study argues that reductions in work hours per week of up to 12.5 percent had no negative impact on productivity. Methodology always matters, though. 


The studies relied on group interviews--and therefore user self-reports--as well as some quantitative inputs such as use of overtime. There is some evidence that productivity (output) remained the same as hours worked were reduced.


For public service agencies, shorter working time “maintained or increased productivity and service provision,” the report argues. 


The report’s quantitative evidence of what was measured, and how it was measured, is perhaps ambiguous. The report says “shifts started slightly later and/or ended earlier.” To the extent that productivity (output) in any services context is affected directly by availability, the key would be the ability to maintain public-facing availability. The report suggests this happened.


But the report says “offices with regular opening hours closed earlier.” Some might question whether this represents the “same” productivity. Likewise, “in a police station, hours for investigative officers were shortened every other week.” Again, these arguably are input measures, not output measures. 


So long as the defined output levels were maintained, the argument can be made that productivity did not drop, or might formally have increased (same output, fewer inputs). In principle, at least over the short term, it should be possible to maintain public-facing output while reducing working hours. Whether that is sustainable long term might be a different question. 


The report says organizations shortened meetings, cut out unnecessary tasks, and reorganized shift arrangements to maintain expected service levels. Some measures studied were the number of open cases, the percentage of answered calls or the number of invoices entered into the accounting system. 


In other cases the trial seemed to have no impact on matters such as traffic tickets issued, marriage licenses and birth certificates processed, call waiting times or cases prosecuted. Some will say that is precisely the point: output did not change as hours were reduced.


Virtually all the qualitative reports are about employee benefits such as better work-life balance, though, not output metrics.


And Pearson’s Law still is at work.


Connectivity Networks are Becoming Computer Networks: What That Could Mean

“5G represents a paradigm shift, where the telecom industry is now taking substantial steps towards using the same building blocks as the IT industry,” says Ericsson. That is another way of saying telecom networks are becoming computer networks.


And just as networking is organized using a layered model, so too might business processes be layered.


source: Lifewire 


Think of the analogy to the ways all lawful applications run on IP networks: they use access supplied by third parties, with no required business relationship between the app providers and the access providers. 


To be sure, one entity might own both the transport network and the app, but that is not required. Google owns YouTube, Google search and Google Maps, which in part are transported over Google’s own global IP network. But common ownership is not required.


In the same way, telcos and cable TV companies own some lead apps, and also own access networks. But that relationship is not mandatory: the apps could be delivered over third-party networks as well as over their own networks.

source: Ashridge on Operating Models


The point is that business operations are supported as layers on top of transport network layers. But those business and transport functions are logically separated. Ownership also is logically separated. 


In the future, that might allow different ways of structuring connectivity provider operations. In a sense, the way Comcast operates its theme parks, content studios and programming networks separately from its access networks provides an example. 


Each of those businesses runs independently of the access networks, though all have common ownership. 


source: Illinois Department of Innovation and Technology  


All that might have profound implications for the ways tier-one connectivity providers run their businesses. Connectivity providers run networks to support their core revenue-generating applications: broadband access, voice, business networks and content. 


As a practical matter, the network-operating functions increasingly are logically distinct from the application functions, as a Wi-Fi network is distinct from the apps using it. Perhaps the layers are not quite as distinct as they would be at Google or Facebook, where the app creation and business functions are logically distinct from the ownership and operation of core networks. 


But the principles are the same: all modern computer networks are based on separation of functions: the logical separated from the physical; higher layers isolated from lower layers; applications separated from networks.


The obvious implication is that, over time, connectivity operations will more closely mirror the way all other networks work: transport functions separated from application functions; network functions logically separated from application, use case and revenue models. 


Historically, connectivity providers have bundled their core app or apps with construction and use of the network. In the future, as computer networks, those relationships could change. 


Already, any broadband access network allows lawful apps to be run on the connectivity network, with no business relationship between app owner and network owner. In the future, that might be further developed.  


The perhaps-obvious line of development is to further isolate business operations from the network, as Google’s YouTube, search, messaging, maps, Android and other business units are separated from the operation of Google’s own network. 


source: CB Insights


Assume a future where whole businesses (Google Maps, search, Android, Nest, Chromebook; Verizon mobility, voice, internet access, enterprise and business operations) are run independently of the transport and access networks. 


“Networks” are a service provided to the businesses, not a direct revenue generator. That is precisely how current telco or cable operations are structured already. Revenue is generated by services and apps sold to customers. The network exists only to facilitate the creation and sale of those apps. 


In principle, there no longer is any reason why applications and services need to be--or should be--developed or created to run solely on “my” networks. The bigger opportunity is to own apps and services that run on anybody’s network. 


Few would consider it “better” to create internet of services apps, platforms or services that only work on a single access provider’s network. It clearly is “better” if the platform, apps and services run on any access network, anywhere. 


But that requires a change not only of mindset but of business strategy. Today, most effort is spent trying to create value for things done “on my network.” In the future, some might do better creating value for apps, services and platforms that work anywhere, on any network. 


That assumes the continued existence of multiple competitors able to pursue such strategies. If competition is not the future connectivity framework, few if any access and transport providers will be allowed to spend much energy developing platforms, services or apps that run anywhere, on any network.


Instead, effort will revert to pre-competitive, monopoly objectives: just create and operate a competent access network.


Can Lumen Find a Buyer for Rural and Copper Access Assets?

Lumen Technologies has said it is willing to consider divesting non-core assets that could include up to 18 million access lines, mostly found in rural and other lower-density areas. That might be a tall order. 


The traditional rule of thumb for fixed networks in the U.S. market is that service providers make money in dense urban areas, break even in suburban locations and lose money everywhere else, including rural areas. The same sort of logic applies to fiber to home facilities: FTTH makes most sense in urban areas, sometimes makes sense in suburban areas and most often requires subsidies in rural areas.


That likely still is a reasonable assumption, both in facilities-based competitive markets as well as those based on a single network and wholesale access.


So Lumen has to find a buyer willing to bet it can take the “least desirable” fixed network service territories and upgrade them for higher-performance broadband access, relying on that one anchor service to support the business model. 


That represents a key change in payback models for fiber to home investment. For a few decades, the business logic was that the FTTH upgrade would be driven by a few anchor services: broadband, voice and video entertainment. 


Ironically, the justification for fiber-based networks supporting internet protocol was that they could “support any media type.” The new assumption is that a new FTTH network will mostly be supported by broadband access. Most independent internet service providers, for example, have migrated away from offering voice plus broadband, or broadband plus video, to offering broadband only. 


And, of course, it is harder to create an attractive payback model based on a single service than on two or three relatively popular services. 


Whether in a wholesale or facilities-based competition model, where the hope once was that three anchor services  would support the business model, it increasingly is the case that a single revenue stream--internet access--anchors the payback model. That broadband-led model arguably requires much more stringent cost control than an incumbent cable or telco business model. 


To be sure, telcos and cable operators continue to earn significant revenues from either voice or video services. But internet access is viewed as the revenue driver going forward. All of which makes the facilities-based independent internet service provider business model so relevant. 


The issues include not just infrastructure cost but also competitive dynamics. In an overbuild situation any independent provider of new FTTH services must compete against two incumbents. Even when that is not the case, few telcos can expect to grab more than 40 percent to 50 percent market share of broadband connections. 


In the more-favorable two-provider scenario, it will be tough for a telco to justify an FTTH business case based primarily on the value of internet access services, though some independent ISPs, with lower cost structures, claim they can make a profit even at low housing densities. 


The total cost to build FTTH systems in rural Vermont was about $26,000 per mile, which included absolutely everything (NOC, main pass, laterals, drops and customer installs for six customers per mile) and a 12 percent contingency cushion, according to Timothy and Leslie Nulty of Mansfield Community Fiber. 


“Just six paying customers per mile can be profitable,” they argue. Vermont’s density averages 12 homes per mile, so six paying customers means a 50 percent take rate, which they say they always achieved by the end of the third year.
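

As a rough check on that claim, here is a minimal back-of-the-envelope sketch in Python using the Nultys’ figures; the $80 monthly revenue is the illustrative assumption used later in this post, not a number they reported.

```python
# Back-of-the-envelope check on the "six paying customers per mile" claim.
# The $26,000 per-mile cost and six subscribers per mile come from the
# Mansfield Community Fiber figures cited above; the $80 monthly revenue
# is an illustrative assumption.

cost_per_mile = 26_000        # all-in rural build cost per mile
subscribers_per_mile = 6      # paying customers per mile

capital_per_subscriber = cost_per_mile / subscribers_per_mile   # about $4,333

monthly_revenue = 80          # assumed broadband revenue per subscriber
months_of_gross_revenue = capital_per_subscriber / monthly_revenue

print(f"Capital per subscriber: ${capital_per_subscriber:,.0f}")
print(f"Equivalent gross revenue: about {months_of_gross_revenue:.0f} months")  # ~54 months
```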


Other estimates suggest per-mile costs in the $18,000 to $22,000 range, so the Mansfield figures do not appear out of line. On a “homes passed” basis, some estimate a network cost of less than $700 per passing in urban and suburban areas. In rural areas the cost per home passed might be in the range of $3,656.


source: Cartesian


Those costs typically do not include the additional cost to serve a customer, which might double the full cost per customer. Most would agree equipment costs have declined over the past decade, though construction costs arguably have not. 


Assume costs in urban areas ranging between $670 per passing and $1,313 per passing, representing perhaps 70 percent of all households.


Assume customer premises equipment and installation labor adds $600 to the cost of serving an internet access customer on an FTTH network. Assume 40 percent take rates and an average cost per passing of $1,000. 


That implies a cost per customer of about $2,500. Assume internet access revenue is $80 a month, or $960 per year. Payback on invested capital might take a while, assuming 20 percent net margins after loading marketing, operations and other costs. Annual net profit might be as low as $192 per customer in that scenario, with break-even happening in a decade and a half or so.
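

That arithmetic can be laid out explicitly. A minimal sketch, using only the assumptions above and a simple undiscounted payback (no churn, price changes or cost of capital):

```python
# Simple, undiscounted payback sketch using the assumptions above.

cost_per_passing = 1_000      # network cost per home passed
take_rate = 0.40              # share of passed homes that buy service
cpe_and_install = 600         # customer premises equipment plus install labor

network_cost_per_customer = cost_per_passing / take_rate                # $2,500
total_cost_per_customer = network_cost_per_customer + cpe_and_install   # $3,100

annual_revenue = 80 * 12      # $960 per customer per year
net_margin = 0.20
annual_net = annual_revenue * net_margin                                # $192

print(f"Payback, network only: {network_cost_per_customer / annual_net:.0f} years")       # ~13
print(f"Payback, with CPE and install: {total_cost_per_customer / annual_net:.0f} years")  # ~16
```

On those assumptions, break-even lands in the 13-to-16-year range, consistent with the “decade and a half or so” estimate above.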


That will be a tough proposition. Independent ISPs operate with higher margins because their costs are far lower. Telcos and cable companies, on the other hand, do have additional revenue streams (voice and video). 


All those are issues Lumen Technologies faces as it ponders the sale of its copper-based networks in rural and other lower-density areas. 


Rules of Thumb About Mobile Capacity Expansion Might Be Changing

Some changes to the connectivity business model are obvious; others more subtle. The ubiquity of mobile services is obvious, as is the growth of internet access and the waning of fixed network voice and entertainment video.

But other changes happen over such long periods of time that a generation or two can live with a new reality without noticing the differences. There was, for example, a time when the internet did not exist; when PCs and mobile phones did not exist. 

Less obviously, the ways mobile network capacity gets created have changed. Some of those ways reduce capital investment and operating costs. 

Historically, there have been three ways mobile operators create more capacity on their networks: acquire new spectrum; use more spectrally efficient technologies; and move to smaller cell sizes. In the 4G era a new tool emerged: use of unlicensed spectrum to offload traffic to local networks.


Buying additional spectrum and shrinking cell sizes obviously increase capex. Shrinking cell radii by 50 percent quadruples the number of cells, for example. Deploying new radios and using new modulation schemes arguably is relatively neutral as a cost driver.
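

The cell-count arithmetic follows from geometry: each cell covers an area roughly proportional to the square of its radius, so halving the radius quadruples the number of cells needed for the same territory. A minimal sketch, with a purely hypothetical service area:

```python
import math

# Approximate each cell as a circle; real planning uses hexagonal grids,
# but the scaling with radius is the same.
def cells_needed(area_sq_km: float, cell_radius_km: float) -> int:
    cell_area = math.pi * cell_radius_km ** 2
    return math.ceil(area_sq_km / cell_area)

area = 1_000  # hypothetical service area in square kilometers

print(cells_needed(area, cell_radius_km=2.0))  # 80 cells
print(cells_needed(area, cell_radius_km=1.0))  # 319 cells, roughly four times as many
```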


Use of unlicensed spectrum, on the other hand, clearly reduces both capex and operating expense. The spectrum does not have to be bought; the radios do not have to be installed or operated; and third parties pay for energy consumption.


5G brings advances in the use of unlicensed spectrum, particularly the ability to aggregate unlicensed spectrum with licensed spectrum resources.


Prior to the 4G era, it can be argued, smaller cell sizes and advances in radio technology and modulation created more usable capacity than new spectrum allocations did. But widespread Wi-Fi offload has changed the toolkit. Wi-Fi offload might account for 30 percent to 40 percent of customer data consumption.


During the Covid pandemic the percentage of consumption shifted to Wi-Fi was certainly much larger than that. In the 5G and succeeding eras, the ability to aggregate unlicensed spectrum to licensed spectrum will be an important new source of effective capacity. 


source: Science Direct 


It is not yet clear how well that pattern will hold up in the 5G and coming eras. Though both network densification (smaller cells) and new spectrum resources will be applied, in addition to better radio technology and more advanced signal modulation, the new spectrum being allocated is discontinuously larger than in the past.


From 1947 to 2017, allocated mobile spectrum doubled about every 8.6 years. The 5G auctions have broken that pattern.
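

Taken at face value, that doubling rate implies compound growth of roughly 8 percent per year, or roughly a 280-fold increase across the 70-year span; a quick check:

```python
# Implications of "doubling about every 8.6 years," taken at face value.
doubling_period_years = 8.6
span_years = 2017 - 1947     # 70 years

annual_growth = 2 ** (1 / doubling_period_years) - 1         # ~8.4% per year
total_multiple = 2 ** (span_years / doubling_period_years)   # ~280x over the span

print(f"About {annual_growth:.1%} per year, roughly {total_multiple:.0f}x overall")
```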


In large part, new spectrum allocations have been relatively small and incremental. The allocations for 5G are discontinuously larger, involving both larger amounts of spectrum per auction and also much more effective bandwidth per unit. 


Simply put, capacity is related to bandwidth: the higher the frequency band, the more potential bandwidth is available.
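

One way to see why this matters: in the Shannon-Hartley model, link capacity scales linearly with channel bandwidth at a given signal-to-noise ratio, and higher-frequency bands offer much wider channels. The channel widths and the 20 dB SNR below are illustrative assumptions, not figures from the chart:

```python
import math

def shannon_capacity_mbps(bandwidth_hz: float, snr_db: float) -> float:
    """Shannon-Hartley capacity: C = B * log2(1 + SNR)."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr_linear) / 1e6

# A 20 MHz low-band or mid-band carrier versus a 400 MHz millimeter-wave carrier.
print(f"{shannon_capacity_mbps(20e6, snr_db=20):.0f} Mbps")   # about 133 Mbps
print(f"{shannon_capacity_mbps(400e6, snr_db=20):.0f} Mbps")  # about 2,663 Mbps
```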


source: Lynk 


Spectrum auction behavior also shows that price per unit decreases as frequency increases, with several drivers at work. Higher-frequency spectrum simply involves more capacity per unit, but also requires more-expensive (denser) networks. So spectrum value is partly the result of the expected cost of deploying networks that use that spectrum.


Historically, the highest prices were obtained for spectrum with good coverage capabilities, hence lower infrastructure cost. Business models also play a role. The problem mobile internet service providers face is that customers require more bandwidth every year, but are generally only willing to pay the same amount.


source: Lynk 

So additional bandwidth is a cost of remaining in business, not necessarily a driver of incremental revenue. Also, relative scarcity plays a role in setting value and prices per unit. Low-band spectrum was the most scarce. Mid-band spectrum is less scarce and high-band (millimeter and above) is relatively plentiful. As always, scarcity increases prices. Abundance reduces prices.

The point is that the traditional rules of thumb about how mobile network capacity gets increased might have changed. Better modulation and radios, new spectrum allocations and smaller cells still are the three traditional ways capacity gets increased.


But use of unlicensed network capacity has become a fourth tool. Even if, historically, smaller cell sizes have driven most of the capacity increase, there will be more balanced improvements in the future, relying much more on the use of additional spectrum, licensed and unlicensed. 

Saturday, July 10, 2021

IBM Envisions all the World's Cloud Resources Easily Usable as Though it Were One Machine

Methodology Matters

Most of us--at least when it suits our purposes--believe decision making is enhanced by the availability of good data. And most of us likely would agree that methodology matters when gathering data. 


So notes Ookla in reviewing data on broadband speeds described in a recent report.  “Our concern with the rest of the report is that the network performance test results the report was derived from painted an inaccurate picture of what constituents were actually experiencing in the district.”


“The results presented greatly underestimated the speeds being delivered by the service providers throughout most of the study area while overestimating some others,” said Ookla, which compared its own data with that supplied by M-Lab in the report. 


“The speeds measured by Speedtest for the same areas and the same time period are dramatically higher in most areas, indicating that additional infrastructure investments are unnecessary where constituents can already achieve network speeds that meet FCC minimums,” said Ookla. 


There is more than one way to calculate an average.  The “mean” average is the sum of all measurements divided by the number of records used. “This number is valuable, but it can be influenced by a small portion of records that may be extremely high or low (outliers),” said Ookla. “As fiber is installed within an area, a significant number of tests from ultra-high-speed connections can skew mean averages up.”
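

A minimal illustration of that outlier effect, using made-up speed samples rather than anything from the report: a couple of newly installed gigabit fiber connections pull the mean well above what the typical user experiences, while the median barely moves.

```python
from statistics import mean, median

# Hypothetical download speeds (Mbps) for one ZIP code: mostly modest
# DSL and cable connections, plus two new gigabit fiber subscribers.
speeds = [8, 10, 12, 15, 18, 22, 25, 30, 940, 950]

print(f"mean:   {mean(speeds):.1f} Mbps")    # 203.0 Mbps, pulled up by the two outliers
print(f"median: {median(speeds):.1f} Mbps")  # 20.0 Mbps, closer to the typical experience
```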


The opposite also can occur. “M-Lab vastly under-reported the network throughput in every single ZIP code represented in the congressional report,” Ookla said. 


“The ZIP code showing the least amount of difference by percentage between Ookla and M-Lab data was 13803 (Marathon) where M-Lab’s recorded median was 5.5 Mbps and the median from Ookla data was 14.5 Mbps,” Ookla noted. “So the typical speed in Marathon measured by Ookla’s Speedtest was over two and a half times as fast as the average measurement captured by M-Lab.”


“On the other end of the scale, in Whitney Point, M-Lab’s recorded median was 0.9 Mbps while Ookla measured a median of 71 Mbps, almost eighty times faster,” the firm said. 


“It is clear from these results that M-Lab’s performance test does not measure the full capacity of a network connection and thus does not accurately reflect the real-world internet speeds consumers are experiencing,” said Ookla.


“These disparities in measured speed generally arise because some network data providers have low user adoption among consumers, limitations in their testing infrastructure, questionable testing methodologies, or inadequate geolocation resources to precisely locate where a given test was taken,” said Ookla. 



The B2B Sales Journey Has Changed

The business-to-business buyer journey has changed. As in the past, B2B transactions remain complex, with multiple influencers and decision-makers, with many rounds of research, evaluation and stakeholder engagement work required. 

 

The Covid pandemic, during which person-to-person meetings were largely impossible, streamlined the B2B purchase journey. There is less distinction between marketing and sales. Timelines often are compressed. Buying authority is more decentralized, as “computing as a service” can be bought with a credit card.

 

Buyers still must identify the business need, research solutions, evaluate options and reach a decision. But buyers are doing more of that online and on their own.


Enterprise sales have in the past relied largely on field sales. But change is happening. Perhaps a third of business-to-business buyers might be willing to conduct fully virtual transactions for new products up to a value of approximately USD 500,000, according to a McKinsey report.


And marketplaces, ecosystems and platforms can make a huge difference. PCCW Global, using an automated system covering sales through settlement, “gained over 800 customers in the last 18 months, with growing traction, without any actual sales contact,” said Marc Halbfinger, the company’s chief executive.


“We don’t even have to know who the customer is,” he added. Sales come from third parties or online, direct from the trading platform PCCW Global uses. 


B2B sales have evolved as virtual marketing, sales, fulfillment and settlement evolve, using artificial intelligence and other digital tools. Those themes, and many more, are featured in a PTC Webinar Series: Frictionless Business™ episode, “How B2B Sales Will Change, Post Covid.”




Featured panelists included:

  • Matt Bramson, Founder & Managing Partner, Cloud Strategy Solutions, USA

  • Marc Halbfinger, Chief Executive Officer, PCCW Global, Hong Kong SAR China

  • Nancy Ridge,  Founder & President, Ridge Innovative, USA

  • Elmar Rode, Director Communications Industry Strategy Group, Oracle, Germany

  • Gary Kim, IP Carrier principal, acted as moderator


Available to PTC members on 12 July 2021, the episode will be posted to YouTube in about 30 days. Other episodes in the series already are available for immediate viewing.
