Friday, July 9, 2021

Small Business Broadband Gaps?

What is true of U.S. consumers also is true of small businesses: some percentage of locations cannot buy internet access at the defined minimum speed of 25 Mbps.


According to two recent surveys by the National Federation of Independent Business and Google, around eight percent of U.S. small businesses, or about two million to three million firms, lack access to broadband, says the Government Accountability Office.


According to the FCC’s 2021 Broadband Deployment Report, as of year-end 2019, about 96 percent of the U.S. population had access to broadband at the FCC’s established minimum speed benchmark of 25/3 Mbps. But that of course leaves four percent of locations that cannot buy service at the defined minimum.


source: GAO 


As always, it is rural areas that are most underserved. At least 17 percent of rural Americans lack access to broadband at speeds of 25/3 Mbps, compared to only one percent of Americans in urban areas. 


Also, a significant percentage of consumers who can buy service do not do so. About 31 percent of people nationwide who have access to broadband at speeds of 25/3 Mbps have not subscribed to it, GAO says. 


A Google-sponsored survey of small businesses with fewer than 250 employees found that eight percent of small businesses reported “poor internet access” as a barrier to improving digital engagement, GAO says.


Assuming 32 million smaller businesses, this represents around two million to three million small businesses that potentially lack adequate access to broadband. The caveat, of course, is that some smaller businesses might not need broadband connectivity.
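The arithmetic behind that estimate is simple. A quick sketch, using the 32 million base and eight percent share cited above (both figures from the text, not independent data):

```python
# Back-of-the-envelope check of the GAO-cited estimate: roughly eight percent
# of an assumed 32 million U.S. small businesses lack adequate broadband.

total_small_businesses = 32_000_000
share_lacking_broadband = 0.08  # from the Google-sponsored survey

affected = total_small_businesses * share_lacking_broadband
print(f"~{affected:,.0f} small businesses")  # ~2,560,000 small businesses
```

The point estimate lands in the middle of the "two million to three million" range the post cites.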


A nationally representative survey of rural small businesses, sponsored by Amazon and the U.S. Chamber Technology Engagement Center (C-TEC), found that approximately 20 percent of rural small businesses were not using fixed network broadband.


That does not mean those businesses could not buy, only that they did not buy. About five percent were using the internet with a dial-up connection. That is not necessarily because they could not buy broadband; some might simply prefer dial-up. That could be the case for some retailers, for example, who use internet access only to process credit card and debit card charges.


Also unclear are instances where mobile data access was used in place of fixed broadband. 


That noted, there likely are ever fewer businesses that can run their operations without broadband access.


Thursday, July 8, 2021

Back to Single-Product or Single-Purpose Networks?

One result of lower U.S. consumer interest in buying fixed network voice services or linear video subscriptions is a waning of the value of bundled service packages and an increase in single-product subscriptions for internet access on fixed networks. 


 

source: Parks Associates 


Oddly, that is something of a return to the application-specific networks of old. Fifty years ago, all networks were application specific: TV and radio broadcast networks, cable TV networks, satellite networks, telco networks and mobile networks.


That has implications for facilities-based providers of fixed network services, as the financial upside from consumer fixed networks increasingly relies on broadband internet access, with dwindling contributions from voice or video subscriptions. 


That makes the payback model harder: in the bundle’s heyday, service providers could hope to sell two or three services per account. So where a triple-play account could generate $200 per line, broadband-only accounts generate $40 to $100 per line.


As difficult as the fiber-to-home decision was in the days of triple-play leadership, such decisions now must turn on other values, such as access networks supporting business customers, edge computing or cell site backhaul. 


That is especially true in markets where two or three suppliers operate. In Knoxville, Tenn., for example, Xfinity covers 91 percent of locations; AT&T covers 81 percent of locations and WoW covers an additional 36 percent of locations. All three internet service providers sell gigabit speed services. 


source: Broadband Now 


The big difference is that AT&T sells symmetrical service, an attribute that could well be the key to further gains, for AT&T as well as other telcos. 


Can Broadband Definitions be Changed in a Platform-Neutral Way?

In principle, broadband speeds will keep increasing almost indefinitely. A reasonable projection is that headline speeds in most countries will be in the terabits-per-second range by 2050.


Though the average or typical consumer does not buy the “fastest possible” tier of service, the growth of headline tier speed since the days of dial-up access has been remarkably consistent.


And the growth trend of 50 percent per year speed increases, known as Nielsen’s Law, has operated since the days of dial-up internet access. Even if the “typical” consumer buys speeds an order of magnitude less than the headline speed, that still suggests that--at a time when the fastest-possible speed is 100 Gbps to 1,000 Gbps--the typical consumer will be buying service at speeds of not less than 1 Gbps to 10 Gbps.


Though typical internet access speeds in Europe and other regions are not yet routinely in the 300-Mbps range, gigabit-per-second speeds eventually will be the norm globally, as crazy as that might seem, perhaps by 2050.


The reason is simply that the historical growth of retail internet bandwidth suggests it will happen. Over any decade-long period, internet speeds have grown about 57-fold. Since 2050 is three decades off, headline speeds of tens to hundreds of terabits per second are easy to predict.

source: FuturistSpeaker 


Some will argue that Nielsen’s Law cannot continue indefinitely, just as most would agree Moore’s Law cannot continue unchanged. Even with some significant tapering of the rate of progress, the point is that headline speeds in the hundreds of gigabits per second still are feasible by 2050. And if the typical buyer still prefers services an order of magnitude less fast, that still indicates typical speeds of 10 Gbps to 30 Gbps or so.
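The compounding behind those projections is straightforward. A minimal sketch, assuming an illustrative 1 Gbps headline tier in 2020 as the baseline (the baseline is an assumption for the example, not a figure from the post):

```python
# Nielsen's Law: headline (top-tier) consumer bandwidth grows about
# 50 percent per year. The 2020 baseline of 1 Gbps is illustrative.

GROWTH = 1.5  # 50 percent per year

def projected_speed_gbps(base_gbps, base_year, target_year):
    """Compound the annual growth rate over the elapsed years."""
    return base_gbps * GROWTH ** (target_year - base_year)

# The "57 times per decade" figure is just ten years of compounding:
decade_multiple = GROWTH ** 10
print(f"{decade_multiple:.1f}x per decade")  # 57.7x per decade

# Three decades out from a 1 Gbps headline tier:
speed_2050 = projected_speed_gbps(1.0, 2020, 2050)
print(f"2050 headline: ~{speed_2050 / 1000:,.0f} Tbps")  # ~192 Tbps
```

A 1 Gbps baseline compounds into roughly 200 Tbps by 2050, consistent with the "hundreds of terabits per second" range the trend line implies; a higher baseline, or a tapered growth rate, moves the endpoint up or down accordingly.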


Speeds of a gigabit per second might be the “economy” tier as early as 2030, when headline speed might be 100 Gbps and the typical consumer buys a 10-Gbps service. 


source: Nielsen Norman Group 


So there is logic to altering minimum definitions over time, as actual usage changes. In fact, typical speeds might be increasing faster than anticipated. 


Also, there is a difference between availability and actual customer demand. Fewer than 10 percent of potential customers who can buy gigabit service actually do so, at the moment, in the U.S. market. 


Perhaps 30 percent of customers buying service at 100 Mbps or higher believe it is more than they need. Perhaps 40 percent of customers buying gigabit services believe it is more than they need. 


The issue is that no definition can be technologically neutral, either for upstream or downstream speeds. Since 75 percent to 80 percent of U.S. customers already buy fixed network service at speeds of 100 Mbps to 1,000 Mbps, a minimum definition of “broadband” set at 100 Mbps is not unreasonable. 


The issue is that some platforms, including satellite, fixed wireless and digital subscriber line, will have a tough time meeting those minimums. 


Cable operators are virtually assured they can do so with no extra effort. 


Upstream bandwidth poses issues for most platforms other than modern fiber-to-home platforms, if the definition were set at 100 Mbps upstream, for example. 


It’s tricky, and almost impossible to reset broadband definitions in a platform-neutral way.


You Cannot Manage What You Do Not Measure

The adage that we cannot manage what we cannot measure is fair enough. So is the adage that "you get what you inspect, not what you expect."


Different eras in the connectivity business tend to bring distinct concepts and terms to the fore, and the changes in terminology reflect new business issues as much as efforts to measure progress in meaningful ways or enhance perceived firm value. 


Prior to the 1980s, a public company’s value might more often be stated in terms of equity valuation. Two decades later, “enterprise value” was more common--and useful--for startups and potential acquisition targets. Enterprise value is equity value plus debt, and provides a snapshot of the price to acquire a firm.


In the 1980s, “revenue per line” still made good sense for consumer accounts. Most lines were producing revenue, and revenue per account was still simple: voice only, and one line per house. Simply put, consumer “revenue per line” and “revenue per account” were virtually identical.


None of that makes sense in any market where serious facilities-based competition exists. 


Twenty years later, that metric was not useful. There was a significant difference between deployed assets and take rates: an active line passing a location might, or might not, have a paying customer. So “revenue per line” and “revenue per account” were no longer comparable. 


After U.S. cable companies started selling broadband access and voice, while telcos sold video entertainment and broadband, matters were even more complicated. A single house might buy voice, video entertainment or broadband, in any combination. 


Revenue contribution per product, and profit margin per product were distinct. Also, the question of how to define revenue contribution from any single location became complicated. So the concept of “revenue generating unit” arose. A house buying two services represented two RGUs. 


That concept replaced the concept of counting “subscribers.” RGU is a measure of units sold; a subscriber is an account. 
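The distinction is easy to illustrate. A minimal sketch, using hypothetical household data (the service mixes below are invented examples):

```python
# RGUs vs. subscribers: a subscriber is one account (one household), while a
# revenue generating unit (RGU) is one service unit sold. A house taking
# voice plus broadband is one subscriber but two RGUs.

households = [
    {"voice", "broadband", "video"},  # triple-play home
    {"broadband"},                    # broadband-only home
    {"voice", "broadband"},           # double-play home
]

subscribers = len(households)                          # accounts
rgus = sum(len(services) for services in households)   # units sold

print(subscribers, rgus)  # 3 6
```

The same customer base thus reports twice as many RGUs as subscribers, which is exactly why the two metrics diverged once multi-service bundles became common.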


Similar issues arose with the concept of average revenue per user, in the mobile business especially, when the majority of accounts served multiple users on the same account. Average revenue per account and ARPU diverged. 


In the business segment, as competitive local exchange carriers emerged, upstarts wanted some way to describe sales that generated larger numbers, since sales were often largely T-1 data lines. So the concept of the “voice grade equivalent” (VGE) arose, representing the virtual number of voice grade circuits sold, at 64 kbps each. A single T-1 line thus represented 24 VGEs.
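The conversion is simple arithmetic: each voice-grade (DS0) circuit is 64 kbps, and a T-1 carries 24 of them. A short sketch with illustrative line counts (the T-3 entry, at 28 T-1s, is an added example beyond what the post mentions):

```python
# Voice-grade equivalents: count each circuit type as its number of
# 64 kbps voice-grade (DS0) channels.

CHANNELS = {
    "DS0": 1,    # a single voice-grade line
    "T-1": 24,   # 24 x 64 kbps channels
    "T-3": 672,  # 28 T-1s, i.e. 28 * 24 channels
}

def vge_count(lines_sold):
    """Convert a mix of circuits sold into total voice-grade equivalents."""
    return sum(CHANNELS[kind] * qty for kind, qty in lines_sold.items())

# A CLEC selling 10 T-1s reports 240 VGEs rather than 10 lines:
print(vge_count({"T-1": 10}))  # 240
```

The metric's appeal to upstarts is visible in the example: the same sale sounds 24 times larger expressed in VGEs than in lines.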


Undersea networks and other long-haul transport networks use dark fiber and lit fiber as metrics. 


Now Telstra uses a metric called “transacting minimum monthly commitment” (TMMC), which applies to postpaid mobile accounts. TMMC is regarded as a lead indicator of ARPU trends, since actual ARPU might vary based on roaming revenue, handset revenue and out-of-bundle revenue.


TMMC might be considered a measure of service contract value. 


The point is that new metrics reflect changes in the core business. Deploying lines or capacity is different from sales of capacity. Accounts and units sold are different. Revenue for products varies, as does profit margin. Contract value provides insight in a way that total revenues might not. 


Nearly all changes in terminology reflect changes in business models.


Wednesday, July 7, 2021

Small Firm Execs Say Work from Home Harmed Productivity

Big firms behave differently than small ones, in almost every respect. Big firms often can support and even benefit from regulations that add cost to running a business. Small firms have almost no ability to leverage regulatory overhead for business advantage. 


Large businesses can survive a significant number of underperforming employees. Few small businesses can do so. 


Some large businesses believe work at home for most employees will not harm productivity. But 45 percent of U.S. small business employers say employee productivity decreased while working from home, according to a survey conducted by Digital. 


source: Digital 


B2B Sales Journeys Now Begin Online

No matter how long a business-to-business sales process takes, perhaps 60 percent to 70 percent of potential business product buyers have conducted research on solutions to their problems before they contact a sales professional.


In other words, “B2B customers today progress more than 70 percent of the way through the decision-making process before ever engaging a sales representative,” according to some studies.


The B2B sales journey in 2020 began online about 90 percent of the time, according to Forrester Research; Google likewise finds that search is involved 90 percent of the time.


In 2020, 80 percent of the average B2B buyer journey took place online, and whatever statistics you choose to believe, online research is a huge part of the buyer journey.  


Intensified by the pandemic, the shift to remote selling now encompasses 96 percent of B2B sales teams, according to McKinsey.


That includes checking a supplier’s website, doing online searches and reading user reviews, according to Marketing Charts. Some 94 percent of all potential business customers conduct online research before making any purchases valued at more than $100,000, for example.  


source: Marketing Charts 


Accenture notes that business buyers also increasingly expect a “consumer online” experience when buying B2B. In 2014, for example, about 40 percent of prospects conducted online research for a majority of goods priced under $10,000, and 31 percent did so at that frequency for goods costing at least $100,000.


More importantly, perhaps, many B2B transactions take place online as well. 


source: Marketing Charts 


That remained true in 2020. More than half of prospects reported finding solutions online. And 70 percent of buyers defined their solutions before they ever contacted a sales professional.


source: Demand Base 


Work from Home Productivity Less Clear than Many Believe

Work-from-home productivity often is thought to be as good, if not better, than in-office productivity, at least in the short term, and at least for knowledge workers. It is less clear whether productivity for many types of government services has improved. 


source: McKinsey 


It likely is much more complicated and nuanced than that. Business owners, for example, do not share that belief, though employees like working from home. But such attitudes are not the same as productivity gains.


And there is some evidence that work-from-home productivity actually is lower than in-office productivity.


According to Microsoft, “when we compare collaboration trends in Microsoft 365 between February 2020 and February 2021:

  • Time spent in Microsoft Teams meetings has more than doubled (2.5X) globally and, aside from a holiday dip in December, continues to climb.

  • The average meeting is 10 minutes longer, increasing from 35 to 45 minutes.

  • The average Teams user is sending 45 percent more chats per week and 42 percent more chats per person after hours, with chats per week still on the rise.

  • The number of emails delivered to commercial and education customers in February, when compared to the same month last year, is up by 40.6 billion.

  • And we’ve seen a 66 percent increase in the number of people working on documents.”


So people spend much more time in meetings than they did when in the office; more time with email and chat; more time working on documents. 


In principle--and assuming one can measure it--productivity increases as output is boosted using the same or fewer inputs. An initiative in Iceland, a country with notably low productivity, suggests that service productivity by units of government does not suffer when working hours are reduced.


To be sure, service sector productivity is devilishly hard to measure, if it can be measured at all. It is hard to measure intangibles. And there is some evidence that satisfaction with public sector services is lower than with private services, and substantially lower for many types of government services.


Productivity is measured in terms of producer efficiency or effectiveness, not buyer or user perception of value. But it is hard to argue that the low perceived quality of government services is unrelated to “productivity.” 


source: McKinsey 


And what can be measured might not be very significant. Non-manufacturing productivity, for example, can be quite low, in comparison to manufacturing levels. 


And there are substantial differences between “services” delivered by private firms--such as airline travel or communications-- and those delivered by government, such as education, or government itself. 

 

The Iceland study argues that reductions in work hours per week of up to 12.5 percent had no negative impact on productivity. Methodology always matters, though.


The studies relied on group interviews--and therefore self-reports--as well as some quantitative inputs, such as use of overtime. There is some evidence that productivity (output) remained the same as hours worked were reduced.


For public service agencies, shorter working time “maintained or increased productivity and service provision,” the report argues. 


There is perhaps ambiguous quantitative evidence in the report of what was measured or how it was measured. The report says “shifts started slightly later and/or ended earlier.” To the extent that productivity (output) in any services context is affected directly by availability, the key would be the ability to maintain public-facing availability. The report suggests this happened. 


But the report says “offices with regular opening hours closed earlier.” Some might question whether this represents the “same” productivity. Likewise, “in a police station, hours for investigative officers were shortened every other week.” Again, these arguably are input measures, not output measures. 


So long as the defined output levels were maintained, the argument can be made that productivity did not drop, or might formally have increased (same output, fewer inputs). In principle, at least over the short term, it should be possible to maintain public-facing output while reducing working hours. Whether that is sustainable long term might be a different question. 
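The "same output, fewer inputs" arithmetic is worth making explicit. A minimal sketch with illustrative output and hours figures, using the 12.5 percent reduction from the Iceland trials:

```python
# Productivity as output per unit input: if output holds constant while
# weekly hours fall 12.5 percent, measured productivity formally rises.
# The output of 100 units over a 40-hour week is an illustrative baseline.

def productivity(output, hours):
    """Output produced per hour of input."""
    return output / hours

baseline = productivity(output=100.0, hours=40.0)           # 2.5 units/hour
shortened = productivity(output=100.0, hours=40.0 * 0.875)  # 12.5% fewer hours

gain = shortened / baseline - 1
print(f"{gain:.1%}")  # 14.3%
```

Holding output constant while cutting inputs 12.5 percent mechanically raises measured productivity by about 14 percent, which is why the question of whether output genuinely held constant matters so much.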


The report says organizations shortened meetings, cut out unnecessary tasks, and reorganized shift arrangements to maintain expected service levels. Some measures studied were the number of open cases, the percentage of answered calls or the number of invoices entered into the accounting system. 


In other cases the test seemed to have no impact on matters such as traffic tickets issued, marriage and birth licenses processed, call waiting times or cases prosecuted. Some will say that is precisely the point: outputs did not change as hours were reduced.


Virtually all the qualitative reports are about employee benefits such as better work-life balance, though, not output metrics.
