Monday, April 13, 2020

Highly Non-Linear Phenomena Mean We Tend to "Fight the Last War"

The problem with non-linear phenomena is that they are hard for the human mind to process. In all likelihood, many parts of the United States, for example, already have passed the peak rate of Coronavirus infection. Assuming all our social distancing works, the falloff should be as dramatic as the uptake.
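To see why, consider a minimal sketch, using a logistic curve with purely illustrative parameters rather than any actual epidemiological model: when cumulative infections follow that kind of S-curve, daily new infections rise and fall roughly symmetrically around the peak.

```python
# Minimal sketch (illustrative parameters, not an epidemiological model):
# if cumulative infections follow a logistic curve, daily new infections
# fall off about as fast as they ramped up.
import math

K = 1_000_000   # assumed final cumulative infection count
r = 0.25        # assumed growth rate per day
t_peak = 40     # assumed day of peak new infections

def cumulative(t):
    """Logistic cumulative infection count at day t."""
    return K / (1 + math.exp(-r * (t - t_peak)))

daily_new = [cumulative(t + 1) - cumulative(t) for t in range(80)]
peak_day = max(range(80), key=lambda t: daily_new[t])

# Ten days before and ten days after the peak look roughly the same.
print(f"Peak day: {peak_day}")
print(f"Day {peak_day - 10}: {daily_new[peak_day - 10]:,.0f} new cases")
print(f"Day {peak_day + 10}: {daily_new[peak_day + 10]:,.0f} new cases")
```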


Ironically, just at the point where people and businesses are moving to mandate the use of masks, we arguably need them less than we did earlier in the epidemic. Fast-moving, non-linear processes do that to us. Much as generals always plan to “fight the last war,” our responses to the pandemic seemingly lag the state of reality.


source: University of Washington


To put it in personal terms, because of the hypothesized incubation period, and because of the social distancing most of us have followed, the risk of new infection is now actually quite low. That does not mean my own social distancing ends, or lets up. That does not mean I stop disinfecting. It simply is to say that the period of greatest danger has likely passed, now followed by a period of rapidly decreasing risk. So instead of projecting the present state into the future, as though it were unending, I actually need to start thinking practically about the small steps back to normality, and about what precautions are prudent even longer term, keeping in mind the highly non-linear nature of the threat. It does not make sense to operate post-epidemic as I did at the peak of the epidemic. The better question is what practices will continue, virtually indefinitely, and at what level. Hand washing and use of sanitizing wipes likely are permanent.


Avoiding large crowds, when possible, likely will continue for some time. The more immediate issue is when to recommence travel by airplane. At what point is that an acceptable risk?


Most likely, we all are going to overshoot, just as we delayed taking our most aggressive actions at the onset. We will persist in taking precautions that are out of proportion to the actual degree of threat. Non-linear and fast-changing circumstances will do that.


The Use and Misuse of Statistics

The use and misuse of statistics is an ever-recurring issue. Consider only the issue of how to count internet access availability or take rates. The former is a measure of supply, the latter a measure of demand. “Availability” means a consumer can buy a product. “Access,” as a take rate, means a customer actually does buy it. People confuse the two concepts routinely.
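A toy calculation, using invented numbers, shows how far apart the two measures can be:

```python
# Toy illustration of supply (availability) versus demand (take rate),
# using made-up numbers for a hypothetical market.
total_locations = 1_000_000   # all homes and businesses in the market
locations_served = 900_000    # locations where the service can be bought
subscribers = 600_000         # locations that actually buy it

availability = locations_served / total_locations   # supply: 90% can buy
take_rate = subscribers / locations_served           # demand: ~67% of those who can buy, do
penetration = subscribers / total_locations          # 60% of all locations buy

print(f"Availability: {availability:.0%}")
print(f"Take rate (share of served locations that buy): {take_rate:.0%}")
print(f"Penetration (share of all locations that buy): {penetration:.0%}")
# Quoting 90 percent and 60 percent interchangeably as "access" conflates the two.
```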


Ignore sampling errors or limitations for the moment.  Ignore the impact of definitions, which change. At one point, 10 Mbps was the top speed on a fiber to home network. Today we define “broadband” as a minimum of perhaps 25 Mbps downstream, and the definition will slide higher over time. The point is that we never are comparing apples to apples, over time. 


In isolated and rural areas, where there often is no business case for supplying such access, service only is available if subsidized, and so the issue of “how fast” is important, but arguably secondary to “availability.” We prefer equivalent grades of service everywhere, but the economics of supply mean that rural availability lags urban, virtually always. 


We also frequently ignore some platforms entirely, as when we measure “fixed network” availability but omit the additional coverage supplied by satellite providers or mobile networks. Sometimes it is not clear whether fixed wireless networks are counted with other fixed networks.


It arguably is one thing if a potential customer cannot buy a product; quite another thing if a potential customer chooses not to buy. The first might be considered a failure of policy; the latter a consumer exercise of choice. 


One also has to account for lag times between data collection and publication. Most government data shows what was the case two years ago, not the situation as it stands today. So three years ago, using a minimum standard of 25 Mbps for “high speed,” perhaps 30 percent of “households” did not buy--or perhaps could not buy--the product.


People often mistake “households” for “people,” as well. This illustration, using 2017 reporting data, says “30 percent of U.S. households don’t have a fixed high-speed internet connection.” 


That is wrong. The Federal Communications Commission figures for 2017 stated that 21.3 million people did not have access at that speed, not 21.3 million households. Treating that figure as households overstates the degree of “lack of access” by more than 100 percent, as the typical U.S. household contains more than two people.


source: Karma
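A back-of-the-envelope sketch makes the size of that error concrete. The average household size and total household count below are rough, assumed figures for illustration only, not official statistics.

```python
# Rough sketch of the people-versus-households conflation.
# Assumed, approximate figures: ~2.5 people per U.S. household,
# ~120 million U.S. households.
people_without_access = 21_300_000   # the FCC 2017 figure cited above (people, not households)
avg_household_size = 2.5             # assumed
total_households = 120_000_000       # assumed

households_without_access = people_without_access / avg_household_size

correct_share = households_without_access / total_households
mistaken_share = people_without_access / total_households  # treating people as households

print(f"People converted to households: {households_without_access / 1e6:.1f} million")
print(f"Share of households affected (correct): {correct_share:.1%}")
print(f"Share of households affected (people counted as households): {mistaken_share:.1%}")
# The mistaken figure is overstated by the average household size -- more than 2x.
```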


This is a common error one sees in reports about the size of the digital divide. If one adds two satellite providers to the mix, there is almost no place within the continental landmass not already served by at least two networks selling 25 Mbps service, whatever the limitations of fixed networks in some locations. 


Of course, our goals always are aspirational. Most urban consumers consider 25 Mbps a problem. In my own household, anything less than 50 Mbps triggers the registration of a service issue report and an immediate reboot of the router. As a practical matter, even speeds below 100 Mbps might trigger a reboot. 


The point is that availability--the ability to buy internet access--is not generally a problem. I know people who live in isolated mountainous areas where neither fixed line service nor mobile service is available. They use their mobiles only when “in town.” But those people also choose not to buy satellite internet access. They could buy it; they simply choose not to do so. 


Speed and cost are issues, to be sure. Rare is the wireless platform that will match a hybrid fiber coax network or a fiber to home network in terms of speed or cost per bit. 


The point is not the definitions we use--as those change over time, and should change--so much as the misuse of terms. Availability is one matter; take rates another. People are one matter; households or locations another. 


One frequently sees and hears figures that confuse those concepts, with real implications for the meaning of the data. In the end, we care about take rates. Availability is a measure of our ability to support take rates. But there are grey areas. 


We want reasonable quality services and reasonable prices. That always is hard to do in rural areas. But even in urban areas, when quality and price are not issues, some customers still choose not to buy some services. They might prefer a mobile-only approach to buying fixed access, for example. 


Assessing trends in the real world is hard enough. It never helps when we misapply statistics (unintentionally, perhaps) to make a point.


Sunday, April 12, 2020

Will Business Customer Revenues Drop, Because of Pandemic, Longer Term?

It would not be unexpected to see a near-term dip in business customer revenues as a result of the Covid-19 pandemic, caused, if nothing else, by a huge number of small business bankruptcies, which remove that demand from the system. And since economic contraction depresses demand, a post-Covid recession also will work to inhibit demand, and therefore revenues.


That was what happened in the wake of the 2008 recession, for example. 

source: Analysys Mason


Some expect a slight dip in revenue or slow growth in the wake of the pandemic. Some services will slow more than others, but often as an acceleration of already-existing trends. Generally speaking, what was growing before the pandemic will keep growing; what was declining might shrink faster. 


Consumer spending might prove more resilient, as people tend to spend about the same amount of money, year in and year out, on connectivity services. Telecom service provider revenues did not change much in the wake of the Great Recession of 2008. In fact, according to some studies, U.S. consumer spending on communications actually grew, overall, in its wake.


Whatever the immediate, short-term impact, it would be reasonable to expect the underlying prior trends to reassert themselves within a couple of years--possibly sooner--when it comes to connectivity services.


Work-From-Home at a Massive Level Might Reduce Productivity, Early Evidence Suggests

The massive shift to work-at-home caused by policies related to the Covid-19 pandemic has inadvertently provided a remote-work statistical base we will be analyzing for years, especially regarding the productivity impact of work-from-home at scale.


Most past studies of work-at-home productivity involved smaller sets of workers in functions arguably best suited to remote work (sales, coding, marketing, accounting, legal work and so forth).


What the global pandemic stay-at-home orders have done is push the bulk of enterprise workforces to either work at home or not work. The early data from the change is not encouraging for productivity impact, suggesting that the tools we have are not so much the problem as human ability to adjust to remote work environments and use the tools fully. 


If it is the case that only a third of jobs can be done remotely, forcing everyone to do so will not be universally productive, say professors Jonathan Dingel and Brent Neiman of the University of Chicago Booth School of Business, who conducted a recent study on the subject.


The study suggests 34 percent of U.S. jobs can plausibly be performed at home. Assuming all occupations involve the same hours of work, these jobs account for 44 percent of all wages. The converse is that 66 percent of jobs cannot plausibly be shifted to “at home” mode. 
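A toy example, with invented occupation counts and wages rather than the study's actual data, shows how 34 percent of jobs can account for 44 percent of wages: the remote-capable jobs simply pay above average.

```python
# Invented numbers chosen only to reproduce the 34%-of-jobs / 44%-of-wages pattern;
# they are not the Dingel-Neiman data.
occupations = [
    # (jobs in millions, average annual wage, can be done from home?)
    (34, 78_000, True),    # e.g., professional, technical, office work
    (66, 51_000, False),   # e.g., manufacturing, retail, food service
]

total_jobs = sum(jobs for jobs, _, _ in occupations)
total_wages = sum(jobs * wage for jobs, wage, _ in occupations)
remote_jobs = sum(jobs for jobs, _, remote in occupations if remote)
remote_wages = sum(jobs * wage for jobs, wage, remote in occupations if remote)

print(f"Share of jobs that can be done at home: {remote_jobs / total_jobs:.0%}")   # 34%
print(f"Share of wages those jobs represent:    {remote_wages / total_wages:.0%}") # 44%
```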


As you might guess, some jobs and some areas are more amenable to remote work. The top five U.S. metro areas feature many jobs in government or technology that could be done from home. On the other hand, some areas depend on manufacturing, agriculture, raw materials extraction or other major industries that are not amenable to remote work.

source: Dingel and Neiman


“More than 40 percent of jobs in San Francisco, San Jose, and Washington, DC could be performed at home, whereas this is the case for fewer than 30 percent of jobs in Fort Myers, Grand Rapids, or Las Vegas,” they say. 


Professional, scientific and technical services, management jobs, education, finance, insurance and information jobs are easiest to shift to remote work. Transportation, warehouse operations, construction, retail, agriculture, food services and lodging are among the hardest to shift to remote work. 


The new conventional wisdom is that more remote work is coming, as a permanent change after all the stay-at-home rules put into place to deal with the Covid-19 pandemic. But there is some debate about whether remote work is less productive or not. And if remote work turns out to be less productive or more productive than face-to-face work, there will be consequences for its extension and use. 


Looking only at the impact of the massive stay-at-home orders to counter the Covid-19 pandemic, there is at least some evidence that productivity has suffered, in some countries, because of remote work from home. 


Aternity, for example, has aggregated data from millions of employee devices at more than 500 Global 2000 companies, and that data reveals that the United States has become less productive because of pandemic-driven remote work. The metric is hours of work, captured because Aternity hosts a cloud-based analytics application that tracks work-related application usage.


At the end of March, 77 percent of work in North America had shifted to being performed remotely, the largest share of any continent. The North America trends were bifurcated: U.S. enterprise worker productivity actually dropped 7.2 percent, Aternity reports, while Canadian productivity increased about 23 percent.


“Overall productivity (as measured by hours of work computing time) in Europe declined by 8.2 percent,” according to Aternity. 


source: Aternity


Another study of worker attitudes, this one by National Research Group, suggests that about half of workers 18 to 24 believe their productivity is lower when working from home. Half also believe they are distracted at home. That does not necessarily mean productivity actually is lower, only that the workers feel it is.


Some believe remote work, in some cases, is wildly less productive. A study by Scikey MindMatch estimates that only 0.2 percent of the Indian IT workforce actually is capable of working from home at high levels of productivity.


That finding might run counter to what many observers would expect for remote work productivity, but Scikey describes itself as a firm that supports company efforts to attract personnel who drive “high-performing teams.”


Since talent, skills, intelligence and the ability to perform work at a high level remotely follow bell-shaped curves (normal distributions), people who might be described as “high performing” would be expected to be a minority of all workers.


The Scikey study seems to be operating out at three standard deviations from the mean, beyond which lie only about 0.3 percent of people.
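A quick check of the normal-distribution arithmetic, using only the Python standard library, bears that out: roughly 0.13 percent of a normally distributed population sits above three standard deviations, and about 0.27 percent falls outside plus-or-minus three.

```python
# Tail probabilities of a standard normal distribution at three standard deviations.
import math

def normal_cdf(z):
    """Probability that a standard normal variable is below z."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

above_3sd = 1 - normal_cdf(3)   # one tail: above +3 SD
outside_3sd = 2 * above_3sd     # both tails: outside +/- 3 SD

print(f"Above +3 SD:      {above_3sd:.2%}")    # ~0.13%
print(f"Outside +/- 3 SD: {outside_3sd:.2%}")  # ~0.27%, roughly the 0.3 percent cited
```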


source: Researchgate


Reports about the study indicate that 99.8 percent of the workforce in the information technology sector is incapable of working from home, at least at the very high productivity levels arguably matching what happens at the workplace.


The reason so many are “incapable” of working from home is that they exhibit at least one trait deemed incompatible with success: resistance to learning and exploring (95 percent), a lack of practical communication skills (65 percent) or a lack of planning and execution skills (71 percent).


Some 17 percent of the employees are instruction-driven, and therefore need clear and direct instructions to work their best. About 12.7 percent of the employees are very much dependent on their social interactions, so working from home comes as a real challenge for them. The work itself is not difficult for them, but social interactions are necessary for them to function, Scikey suggests.


What the study likely indicates is simply that the human characteristics MindMatch associates with the highest-performing individuals in a remote work setting are three standard deviations from the mean.


You can make your own assessment of whether that is a functionally valid test of worker suitability for remote work. 


Saturday, April 11, 2020

Are 99.8% of Indian IT Workers Really Unable to Work from Home at a High Level?

Skepticism is likely called for whenever any survey, on any subject, produces highly unusual conclusions. That probably is the case for a study by Scikey MindMatch estimating that only 0.2 percent of the Indian IT workforce actually is capable of working from home at high levels of productivity.


That finding might run counter to what many observers would expect for remote work productivity, but Scikey describes itself as a firm that supports company efforts to attract personnel who drive “high-performing teams.”


Since talent, skills, intelligence and the ability to perform work at a high level remotely follow bell-shaped curves (normal distributions), people who might be described as “high performing” would be expected to be a minority of all workers.


The Scikey study seems to be operating out at three standard deviations from the mean, beyond which lie only about 0.3 percent of people.


source: Researchgate


Reports about the study indicate that 99.8 percent of the workforce in the information technology sector is incapable of working from home, at least at the very high productivity levels arguably matching what happens at the workplace.


The reason so many are “incapable” of working from home is that they exhibit at least one trait deemed incompatible with success: resistance to learning and exploring (95 percent), a lack of practical communication skills (65 percent) or a lack of planning and execution skills (71 percent).


Some 17 percent of the employees are instruction-driven, and therefore need clear and direct instructions to work their best. About 12.7 percent of the employees are very much dependent on their social interactions, so working from home comes as a real challenge for them. The work itself is not difficult for them, but social interactions are necessary for them to function, Scikey suggests.


What the study likely indicates is simply that the human characteristics MindMatch associates with the highest-performing individuals in a remote work setting are three standard deviations from the mean.


You can make your own assessment of whether that is a functionally valid test of worker suitability for remote work.


Gatekeeper Role Diminishes

Barriers to customer switching are considerably lower now than in the past, reducing the power gatekeepers have over their customers, and shifting the ways some control still can be exercised.

Set-top box interoperability is one way to reduce the power of video service vendor lock-in and make switching providers easier. In the past, control of the conditional access function provided by such set-tops was perceived to be a source of business advantage by cable operators.

All that has started to change with the advent of over-the-top video subscriptions, though, which require no dedicated set-top box, only an application that itself supplies the conditional access functions, plus an internet-connected TV or streaming stick.

As the number of global linear video subscriptions grows, shipments of set-top boxes are growing, though. But those boxes are less costly.

Moore’s Law and competition have helped reduce prices for set-top boxes. Service provider profit pressure requires them to get operating costs down, and that includes the cost of customer premises equipment. But it also is the case that set-tops play a smaller role in driving revenue than once was the case.

Set Top Box Price Forecast


source: Technology Futures

These days, revenue growth and profit margins are driven by internet access services, not subscription video or voice. So the value of the set-top also is diminished. That is not to say the box has little value. In fact, Comcast’s X1 boxes supply many functions that add value to the user experience, beyond allowing access to the programming.



source: Technology Futures

Not only have telcos, cable companies, internet service providers and satellite firms become less essential “gatekeepers,” but their roles in the ecosystem as distributors likewise wane. In an internet ecosystem, once internet access is in place, each app provider can function without a distributor in the value chain.

And that has business implications for former distributors as well as all app suppliers.

Over time, some distributors will become app providers, altering their roles in the value chain. Moves by Comcast, AT&T and others to become content producers and copyright owners provide a clear example. The role of Peacock owner (Comcast) or HBO Max owner (AT&T) comes not from the distributor role but from the app provider role.

Functionally, even traditional linear TV subscriptions have become apps. They might be owned by the firm that supplies the internet access, but since that product already can be purchased separately from the internet access, even linear video is functionally an application.

Note the change in distribution once a service evolves from linear to internet delivered: where the service footprint was bound by the franchise areas where a cable company, telco or ISP has access networks, the internet apps can be purchased by anyone with internet access in any country where the app is lawful.

HBO Max, in other words, can be purchased by customers who do not buy AT&T internet access or voice services or mobility services. One of the defining attributes of an internet app is that it can be sold or used anywhere internet access is available, and where the app is lawful.

So one obvious implication of any connectivity provider becoming an app provider is that the former geographic bonds are slipped. Voice and video providers using fixed networks need government permission to operate in specific areas, and are not allowed to operate outside those areas.

Over the top services and mobile operators can, in principle, operate nationwide, or internationally, if other governments permit it. In traditional parlance, that means operating outside the franchise area, not just inside the territory where permission to operate is granted.

To be sure, device interoperability is helpful in reducing barriers to customer switching. But other trends--especially the shift to internet delivery--are reducing barriers to switching in any case.

Friday, April 10, 2020

The New Normal Will Not Last; in 2 to 4 Years, Normal Will Reassert Itself

One hears talk of post-Covid-19 new normal that results in permanent changes in business and consumer behavior. 


But we heard the same analysis in the wake of the 2008 Great Recession. Permanent slow growth was supposed to be the new normal. It did not last.


source: Statista


source: Statista


Some confidently predicted that U.S. firms would never again make as much use of leverage as they had going into 2008. In fact, trends in financial markets suggest there is nothing permanent about new business attitudes toward financial leverage. Use of leverage soared, post-Great Recession, to new levels.


source: Seeking Alpha


Likewise, many analysts suggested consumer behavior had fundamentally changed as a result of the 2008 recession. Consumers would remain wary of debt, it was suggested. That also proved to be incorrect. To be sure, consumers were more cautious for half a decade, but eventually returned to their old ways. 


source: Marquette Associates


Consumer saving rates grew in the wake of the great recession, but only for perhaps four years. 


source: Federal Reserve Bank


The cruise industry saw price declines in the wake of past recessions and event shocks as well, and most expect a slow recovery for cruise line activity in the wake of the pandemic. Other shocks--the internet bubble collapse, the SARS epidemic, the 2008 recession and the Costa Concordia disaster of 2012--all led to price weakness. It took about five years for prices to recover.


source: Market Insider


After the 2008 recession, consumer spending was back up to pre-recession levels. 

source: Bureau of Economic Analysis


To be sure, the recession of 2008 likely accelerated trends that already were happening, but it also led to at least a temporary emphasis on simplicity and thrift, and to fickle changes in preferences. Green and ethical consumerism would wane, some predicted. None of those trends lasted.


So despite all the talk of the new normal, we are likely to see a reversion to the mean after a few years. Linear extrapolations from current behavior are likely to be wrong, as they have been wrong in the past. The new normal will not likely last very long. Eventually, we will return mostly to the original normal, with the caveat that all underlying fundamental trends are likely to remain intact. 


AI Will Improve Productivity, But That is Not the Biggest Possible Change

Many would note that the internet impact on content media has been profound, boosting social and online media at the expense of linear form...