Thursday, September 9, 2021

Agility Prepares Organizations for 3 of 4 Types of Business Risk

Many argue that, since the time of the Greek philosophers, all knowledge can be sorted into four boxes. Some risk analysts use the same scheme, and it implicitly guides everyday business behavior. 


Whether something is known or unknown makes a difference for risk mitigation or management, and also provides a key rationale for developing greater organizational agility. But that requires work and commitment, as a rational executive is going to focus most of his or her time on dealing with “known known” types of risk. 


The cost-benefit ratio of preparing for other types of risk is so unfavorable that most will spend relatively little time on them, with the possible exception of “known unknowns,” where research might plausibly convert an unknown into a known. 


A known known can be statistically modeled, and behavior can be based on the statistical odds of occurrence. 


Processes we know and understand can be statistically modeled. We can make assumptions about likelihood. We can set insurance rates, for example, or devise plans to take market share from a specific competitor. 


When there are processes we know about, but cannot predict, we conduct research to try to eliminate the uncertainty.


“Unknown knowns” involve processes we know exist, but do not deem relevant to us. There is little or no perceived risk, so organizations do not plan for or worry about such matters, because the danger is deemed so remote. 


“Unknown unknowns” are quite dangerous, as nobody recognizes there is any danger. When “one does not know what one does not know,” any rational search for answers will be thwarted.  


Unknown unknown vs. known unknown chart

source: Veritas 

 



A known unknown cannot be accurately modeled, so an organization has to aim for agility, the ability to shift and change if and when the magnitude of an event is large. 


It is impossible to plan, in practical terms, for an unknown unknown. These are the sorts of catastrophic changes that can imperil an organization’s existence. 


Unknown knowns pose risk because an organization might be aware of a hazard, but deem it so unlikely that nothing is done to prepare for such events. 


The point is that organizational agility is a major capability for dealing with three of the four categories of risk: known unknowns, unknown knowns and unknown unknowns. 


source: UX Collective 


“Known knowns” are things we know that we know and understand. Presumably, risk is low because the matter is understood. 


“Known unknowns” are things we realize that we don’t know or understand. Or perhaps a better way of describing this category is that there are matters we know about, but whose potential risks are unclear. 


In either case--known knowns or known unknowns--people have some semblance of certainty as there are boundaries around risk.  


The other two categories involve higher levels of risk, as uncertainty is greater. 


“Unknown unknowns” arguably pose the greatest risk, as the risk factors are not understood, not seen and not believed to be risk factors at all. Perhaps the Covid-19 pandemic is an example. 


“Unknown unknowns” are future outcomes, events, circumstances, or consequences that we cannot predict. We also cannot plan for them. We don’t even know when and where to search for them.


source: Market Business News 


“Unknown knowns” are things that exist and influence our lives and our approach to reality, but are not perceived to do so. Or we do not see their significance, or we refuse to acknowledge the dangers. 


The issue is where to categorize a black swan event. A black swan is an extremely rare event with severe consequences. It cannot be predicted beforehand. Some might say a black swan is a known unknown: we know they happen, but we cannot predict them. 


That categorization is based on the assumption that we know black swans happen, so we understand that much. But we still cannot predict when one will happen, or where. 


Alistair Croll and Benjamin Yoskovitz used the Knowns and Unknowns framework in their book Lean Analytics to describe different ways of looking at data:


  • Known Knowns (facts): analytics data can be used to check whether those facts actually hold.

  • Known Unknowns (hypotheses): can be confirmed or rejected with measurements.

  • Unknown Knowns (our intuitions and prejudices): can be put aside if we trust the data instead.

  • Unknown Unknowns (it can be anything!): are often left behind, but can be the source of great insight. By exploring the data in an open-minded way, we can recognize patterns and hidden behavior that might point to opportunities.



source: Marvel 


A similar framework underlies the Johari Window.


Not "In the Top 10?" No Matter

It has been commonplace for decades to hear it said that the United States is not in the top 10 globally for internet access speed or some other metric. Over the last few decades, it has been argued that the United States similarly was not “in the top 10 globally” for use of text messaging or mobile phones, for example. 


Most recently, it has been argued that the U.S. is falling behind in 5G.  


It has been argued that the United States was behind, or falling behind, for use of smartphones, broadband coverage, fiber to home, broadband speed or broadband price, for example. Likewise, some argued that U.S. customers were “behind” Japan or South Korea on some metrics related to use of digital apps. 


And U.S. average mobile speeds were slow, historically, compared to other developed nations. 


So the “U.S. is behind” storyline is quite familiar. Of course, we might note that the same thing was said about U.S. fixed network telephone service. U.S. installed base metrics rarely ranked better than 12th to 15th globally. 


Some even have argued the United States was falling behind in spectrum auctions. That clearly also has proven wrong. What such observations often miss is a highly dynamic environment, in which apparent U.S. lags are quickly closed.


To be sure, adoption rates have sometimes lagged other regions, early on. But there is a pattern here: early slowness is overcome; performance metrics eventually climb; availability, price and performance gaps are closed over time. 


The early storylines often are correct, as far as they go. That U.S. internet access is slow and expensive, or that internet service providers have not managed to make gigabit speeds available on a widespread basis, can be correct for a time. Those storylines rarely--if ever--hold up long term. U.S. gigabit coverage now is about 80 percent, for example. 


Other statements, such as the claim that U.S. internet access prices or mobile prices are high, are not made in context, or qualified and adjusted for currency, local prices and incomes or other relevant inputs, including the comparison methodology itself. 


Both U.S. fixed network internet prices and U.S. mobile costs have dropped since 2000, for example. 


What observers always forget is the huge amount of the U.S. land surface that is highly rural or unsettled. About 94 percent is unsettled or lightly populated, including mountains, rangeland, cropland and forests. 


In fact, most people live on just six percent of the U.S. land surface, according to the USDA. Also, the United States, like Canada, Australia, Russia and China, is a continent-sized area. Building networks takes longer when larger areas must be covered. 


All that has direct implications for the cost and speed of building networks. Dense urban networks cost the least, on a per-location basis, while rural networks cost the most. Also, incentives to build and operate networks are strongest on six percent of the land surface, and challenging on as much as 94 percent of the land surface. 


The point is that the United States rarely--if ever--ranks in the top 10 on any index of communications performance. In fact, it is more realistic to argue that the U.S. will rank 19th to 20th on almost any measure of teledensity or communications supply. 


A corollary is that rankings do not matter. It is hard to show that a “not in the top 10” ranking has had any apparent negative impact on productivity, innovation or economic growth. The claimed U.S. applications usage gap has not mattered for U.S.-based application firms. 


There is always “some other place” where customers and users do more with a particular application or use case. It never seems to matter, ultimately. Teledensity and other measures of connectivity supply are inputs. What matters is output, the ability to create value from the use of such assets. 

Wednesday, September 8, 2021

Internet Access Got Dramatically Better--60% for Mobile, 32% for Fixed--Over the Last Year

Despite perennial complaints that internet access simply is not available enough, cheap enough or good enough, global internet access keeps getting faster, more available and arguably even more affordable. 


According to Ookla, mobile download speed improved 60 percent  over the last year globally, while fixed broadband speeds got 32 percent faster. 


The global mean of download speeds improved over the last 12 months on both mobile and fixed broadband to 55.07 Mbps (mobile) and 107.50 Mbps (fixed network) in July 2021, Ookla says.  


Mean (average) download speed over mobile was 99 percent faster in July 2021 than in July 2019, 141 percent faster when comparing July 2021 to July 2018, and 194 percent faster when comparing July 2021 to June 2017. 

source: Ookla 


On fixed networks, mean download speed was 68 percent faster in July 2021 than in July 2019, 131 percent faster in July 2021 than in July 2018 and 196 percent faster in July 2021 than in June 2017.
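
As a quick arithmetic check, the year-earlier means implied by those gains can be back-calculated from the July 2021 figures. This is a minimal sketch; the 2020 values it prints are derived from the stated percentages, not Ookla's own reported numbers:

```python
# Back-calculate the implied July 2020 global means from the July 2021 means
# and the stated year-over-year gains. Derived figures, not Ookla's reported data.
mobile_2021, fixed_2021 = 55.07, 107.50     # Mbps, July 2021 global mean download speeds
mobile_gain, fixed_gain = 0.60, 0.32        # 60% mobile and 32% fixed improvement

mobile_2020 = mobile_2021 / (1 + mobile_gain)   # ~34.4 Mbps implied
fixed_2020 = fixed_2021 / (1 + fixed_gain)      # ~81.4 Mbps implied
print(f"Implied July 2020 means: {mobile_2020:.1f} Mbps mobile, {fixed_2020:.1f} Mbps fixed")
```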


On the price front, observers sometimes cite posted retail prices and argue that “prices are too high.” That remains true in many developing countries, but in developed countries the story is not correct. Internet access is not very expensive.


When some claim prices are too high, the typical argument is that U.S. a la carte prices (the retail tariff for internet access, not purchased in a bundle) are higher than prices in other countries.  


Adjusting for currency and living cost differentials, however, broadband access prices globally are remarkably uniform. 


The 2019 average price of a broadband internet access connection--globally--was $72.92, down $0.12 from 2017 levels, according to comparison site Cable. Other comparisons say the average global price for a fixed connection is $67 a month. 


Looking at 95 countries globally with internet access speeds of at least 60 Mbps, U.S. prices were $62.74 a month, with the highest price being $100.42 in the United Arab Emirates and the lowest being $4.88 in Ukraine. 


According to comparethemarket.com, the United States is not the most affordable of 50 countries analyzed. On the other hand, the United States ranks fifth among 50 for downstream speeds. 


Another study by Deutsche Bank, looking at cities in a number of countries at a modest 8 Mbps rate, found prices ranging from $50 to $52 a month. That still places prices for major U.S. cities such as New York, San Francisco and Boston at the top of the price range for cities studied, but the figures do not seem to be adjusted for purchasing power parity, which attempts to adjust prices based on how much a particular unit of currency buys in each country. 


The other normalization technique used by the International Telecommunications Union is to attempt to normalize by comparing prices to gross national income per person. There are methodological issues when doing so, one can argue. Gross national income is not household income, and per-capita measures might not always be the best way to compare prices, income or other metrics. But at a high level, measuring prices as a percentage of income provides some relative measure of affordability. 
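
As a rough illustration of that kind of normalization, the sketch below computes a monthly price as a share of monthly gross national income per person. The formula and the income figure are illustrative assumptions, not the ITU’s published methodology or data:

```python
# Illustrative affordability ratio: monthly broadband price as a percentage of
# monthly GNI per capita. Both inputs are placeholder figures.
def affordability_pct(monthly_price_usd: float, annual_gni_per_capita_usd: float) -> float:
    monthly_income = annual_gni_per_capita_usd / 12
    return monthly_price_usd / monthly_income * 100

# e.g. a $62.74-a-month plan against a hypothetical $65,000 annual GNI per capita
print(f"{affordability_pct(62.74, 65_000):.2f}% of monthly GNI per capita")  # ~1.16%
```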


Looking at internet access prices using the PPP method, developed nation prices are around $35 to $40 a month. In absolute terms, developed nation prices are less than $30 a month. 


A new analysis by NetCredit shows U.S. consumers spending about 0.16 percent of income on internet access, “making it the most affordable broadband in North America,” NetCredit says.  


In Europe, a majority of consumers pay less than one percent of their average wages to get broadband access, NetCredit says. In Singapore, Hong Kong, New Zealand and Japan,  10 Mbps service costs between 0.15 percent and 0.28 percent of income. 


A normalization technique used by the International Telecommunications Union is to attempt to compare prices to gross national income per person, or to adjust posted retail prices using a purchasing power parity method. 


source: ITU 




For one thing, the product people buy changes over time. Customers are buying faster packages than they used to. To the extent that faster tiers of service cost more, “average” prices will climb. On a cost-per-Mbps basis, costs are dropping. 


But there are limits to price levels. Consumers will only spend so much on internet access. That figure tends to be a small percentage of household income, as is spending on all forms of communication service combined. 


Prices for fixed network service have dropped about 92 percent over the last decade, for example, on a cost-per-megabit-per-second basis. Customers also use much more data than they used to. Competition accounts for some of the improvement, even if observers sometimes argue “there is no competition” for consumer broadband services.  
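
A minimal sketch, using two hypothetical plans, shows how headline prices can rise even as the effective cost per megabit per second falls:

```python
# Cost per Mbps for a hypothetical decade-old plan versus a hypothetical current plan.
old_plan = {"price_usd": 45.0, "mbps": 10}    # e.g. a typical plan a decade ago
new_plan = {"price_usd": 65.0, "mbps": 300}   # a faster, nominally pricier plan today

old_cost = old_plan["price_usd"] / old_plan["mbps"]   # $4.50 per Mbps
new_cost = new_plan["price_usd"] / new_plan["mbps"]   # ~$0.22 per Mbps

print(f"Cost per Mbps fell about {1 - new_cost / old_cost:.0%}")  # roughly 95%
```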


The point is that internet access keeps getting better, and more affordable as well.


Was Collaboration Better During Enforced Work from Home? Maybe Not

It is common to hear technology business or policy leaders argue that remote work has not harmed productivity. Leaving aside the issue of whether remote work productivity changes can be measured, collaboration--deemed by most to be vital for knowledge workers--might have gotten far worse because of Covid. 


People like the freedom to work from home, no question.  


That might have happened despite reports that suggest information, knowledge and office workers now are spending more time with electronic forms of communication. But “communication” is not necessarily “collaboration.”


If collaboration is defined as “people working in teams or with others,” then collaboration seemingly has suffered. 


According to Gensler, “high-performing people at top companies tend to do individual work and collaborative work in equal measures—45 percent each, according to our research--with the remaining 10 percent made up of learning and social time.” 


For better or worse, those balances were changed during the period of enforced work from home policies. “While at home during the pandemic, people reported working in individual focus mode 62 percent of the time and 27 percent in collaboration, a disparity that negatively impacts company creativity and productivity,” Gensler argues. 


Before the pandemic, U.S. workers spent an average of 43 percent of their work weeks collaborating either virtually or in person. That number fell to 27 percent for workers who worked from home in 2020, for example. 


Gensler Workplace Survey Graphic 1

source: Gensler, Fortune 

 

So people might report--and likely actually are--spending more time on conference calls, sending or reading emails and messages. But they are collaborating--working with other people--less. 


As many firms explore “hybrid” work models, mixing in-office and at-home days during the week, it might prove hard to capture the fluid collaboration that used to happen, Gensler argues. A hybrid model might capture some of the “formal and structured” collaboration that happened pre-Covid. 


But it will be harder to capture the informal collaboration that is not scheduled and formalized. By design, most “focus” or “do it on your own” work will make more sense “at home.” Team work might logically make more sense “in the office.” 


The problem is that not all moments when collaboration can be helpful can be scheduled in advance. 


source: Gensler, Fortune 


Most reports on “productivity” rely on subjective reports--what people believe has been the case--rather than on more objective metrics. The reason is simply that there exist few “hard” or “quantitative” measures for productivity output. Mostly, people are forced to rely on measuring inputs. And, by definition, inputs are not “outcomes.” 


Gensler found that students are overwhelmingly of the opinion that enforced remote learning has been worse than in-person learning. Whether learning actually (objectively) has suffered is not clear. Students believe their learning has suffered. 



 source: Gensler 


Professional educators, on the other hand, tend to believe remote learning has worked rather well. Students do not agree. 


 source: Gensler 


One potential implication for business leaders is the effect of remote or even hybrid work models on the extent and quality of the personal relationships considered important for just about any organization. A Gensler survey of some 3,000 persons--about 83 percent of whom were students--finds a perceived decline in every type of relationship, compared to pre-Covid and pre-remote-learning modes. 


 source: Gensler 


As “soft” as most productivity “data” might be, the “quality of relationships” angles might be even more difficult to capture. And it is likely that future work modes will matter, as enforced remote work has mattered. As most full-time remote workers have traditionally felt they are at a disadvantage to co-workers “at the main office,” so some workers with more “remote” than “in person” experiences might also eventually see disadvantages. 


Most organizations can function for a short time under duress, without compromising long-term effectiveness. It arguably is a different matter if the duress continues for a long time. The reports we all hear about worker “burnout” or “Zoom fatigue” are of that sort. 


Up to this point, collaboration has been “synchronous,” bringing people together in real time. It is unclear whether “asynchronous” collaboration will work as well as, or better than, synchronous modes. 


Over time, enforced isolation might start to have implications for organizational effectiveness, even if emergency, “short term” performance metrics do not seem impaired. 


Other business metrics, such as the ability to inculcate company culture, or create internal relationships important for young worker advancement, might also be suffering. It simply is nearly impossible to measure such processes. 


Most business leaders hope that hybrid work modes will provide the best of both worlds: happier employees and sustainable productivity. We simply do not know yet whether that is possible or likely. 


Productivity always is difficult to measure, so we will have to make do with best guesses about long-term productivity under hybrid or remote work.  


Sunday, September 5, 2021

Loosely-Coupled Value Chains and "Becoming a Platform"

Loosely-coupled value chains create new business problems for firms used to operating in tightly-coupled value chains. The big business problem is the “permissionless” ability to participate in the value chain. 


App, content or marketplace suppliers do not “need the permission” of the access provider to conduct business. And that lessens the connectivity provider’s ability to construct a direct business relationship with any app layer supplier, as well as the app provider’s need for any such relationship. 


Even in the case of multi-access edge computing, where there arguably is a greater value to integrating 5G access functions with edge real estate, “convenience” or “time to market” or “cost” is more often the driver of collaboration than “necessity.” 


So one often hears advice for connectivity providers that they must move beyond the connectivity role. 


“In order to monetize 5G, operators need to move from a connectivity mindset focused on the underlying technology to providing a network as a platform that connects customers efficiently with their services (in the way they choose) by enabling multiparty B2B and B2B2X models,” argues  Sandra O’Boyle, Heavy Reading analyst.


It is reasonable advice. In fact, creating a platform or ecosystem is de rigueur thinking these days in most industries. 


But “platform business models” are very difficult to create in the communications services business, even when it remains true that connectivity is required for modern cloud-based computing.


Few words are as misused or misunderstood as “platform.” Only “digital transformation” comes close. A platform business model is based on an entity facilitating transactions between third parties, and making money by doing so. Older examples include eBay and Amazon, which do not “own” the products being bought and sold on their exchanges. 


Even used in the classic sense within the computing industry, operating as a platform means that third party apps can be built using the platform. To be sure, there is indirect value as the ecosystem of apps and peripherals compatible with the platform grows. But there often is no direct financial relationship between any of the third parties and the platform upon which they run. 


Newer examples of platform business models include Airbnb, which facilitates the renting of short-term lodging, without owning the rentable assets. 

 

source: Andreessen Horowitz 


A platform business model essentially involves becoming an exchange or marketplace, more than remaining a direct supplier of some essential input in the value chain. It is, in short, to function as a matchmaker. 


The platform facilitates selling and buying. The platform allows participants in the exchange to find each other. 


Platforms are built on resource orchestration; pipes are built on resource control. Value quite often comes from the contributions made by community members rather than ownership or control of scarce inputs vertically integrated by a supplier. 


To use an analogy, whole businesses run on electricity, but we rarely hear strategy advice to “partner” with electricity suppliers. 


It is true that network slicing creates bandwidth on demand and customized forms of bandwidth on demand that support potentially new revenue or value creation models for enterprise users of the network.


But that is not so much a case of “creating a platform business model” as it is creating additional value for connectivity services. 


O’Boyle says this adds “flexibility to support any service for any industry through any business model.” True, but not necessarily an instance of “changing” a connectivity business model. It is an instance of potentially increasing the value of communications, though. 


Perhaps the better way to characterize network slicing is less a shift away from a connectivity mindset and more an issue of whether connectivity providers can reposition at least some of their revenue opportunities in other parts of the value chain that do indeed benefit from network slicing. 


In other words, generating revenue as an app, marketplace or service provider--in addition to earning money as a connectivity services supplier--is the issue. Owning apps and operations that create value is the issue, rather than “becoming a platform.” 


Few connectivity service providers can genuinely hope to become a marketplace supporting transactions between users and suppliers of bandwidth and access, rather than making money selling such access directly to customers. 


In a strict sense, the former means the marketplace owner does not own the assets being bought and sold; the marketplace simply makes money when a transaction occurs. When the latter case--selling connectivity services to customers--remains the main source of revenue, a firm is not truly using a platform business model, no matter how sophisticated its products.


Saturday, September 4, 2021

Digital Transformation is Hard Because Business is Hard

There are more ways to fail at digital transformation than to succeed, even assuming an entity has set actual performance indicators for itself, successfully trained its employees, changed its culture and adapted its business processes to match. 


Of course, that is a general rule for business in general. Consider the failure rate of startups. After a decade, more than 80 percent do not exist. After five years, half of small businesses have failed. 


For the biggest of firms, leadership or existence also is fleeting. Consider that in the 92 years of the Dow Jones Industrial Average, there have been some 100 changes in its member firms, and about 63 percent of those changes occurred in the second half of that period. 


To be sure, most of the firms removed from the Dow did not go out of business immediately. Many survive, but as less-robust versions of themselves, as their markets have shrunk. The point is that no firm is successful “forever.” Decline is the bookend to birth. 


Likewise, the typical firm listed on the Standard & Poor’s 500 index remains on the list less than 20 years.  


All of that is simply to note that transformation in business or life is hard, and prone to fail, just as business success is hard to create and sustain over long periods of time.


As applied to any enterprise or firm, “transformation” would normally be measured quantitatively by revenue indices, though cost, profit margin, customer types, purchase volumes or other similar metrics might also be involved. 


In other words, a successful digital transformation should often change the business model. 


source: Four Week MBA 


In the end, successful transformation “should” be measurable in terms of new revenue, new product, new service, new market segments served, new types of customers or new charging models. 


source: Business Models Inc. 


Basically, transformation would be measured by percentage of revenue earned outside the legacy core business. That is a tall order. 
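
For illustration only, with made-up revenue figures, that metric is a simple revenue-mix calculation:

```python
# Share of revenue earned outside the legacy core business (hypothetical figures).
revenue_by_line = {
    "legacy connectivity": 820.0,   # $ millions
    "edge computing": 95.0,
    "managed security": 60.0,
    "iot marketplace": 25.0,
}
total = sum(revenue_by_line.values())
outside_core = total - revenue_by_line["legacy connectivity"]
print(f"Revenue outside the legacy core: {outside_core / total:.0%}")  # 18%
```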


To be fair, many also would consider a “transformation” successful if it allowed an entity to run its legacy business more profitably, or supported higher growth rates in the core business. The metrics for such a change would be measurable in terms of revenue growth, profitability improvements or related indicators such as stock price appreciation. 


Others would measure results by measures of customer experience improvement. Relevant key performance indicators could also focus, as noted above, on operational or financial metrics as well as customer experience improvements. 


Some of us would tend to discount “customer experience” metrics in favor of operational or financial metrics, though. CX metrics often are hard to quantify and are more subjective than revenue, profit margin, account growth, churn rates, new account gains, average revenue per account and similar financially or operationally focused indices. 


Perhaps the best example is Netflix. It transitioned from renting DVDs by mail, to delivering online content, to producing content. Amazon transitioned from selling books to becoming an e-commerce platform, and now finds its profitability driven by Amazon Web Services. 


Though the logical strategic moves might be movement into related roles in a value chain, some argue that is almost never sufficient to achieve big transformations. Instead, changing a business model often requires a shift of organizational mission. 


DIY and Licensed GenAI Patterns Will Continue

As always with software, firms are going to opt for a mix of "do it yourself" owned technology and licensed third party offerings....