Wednesday, October 14, 2020

How Much Work from Home is Permanent? What is Productivity Impact?

Virtually everyone seems to believe that work patterns will be more varied, once the Covid-19 pandemic has passed. What is somewhat unclear is how much the patterns will change, on a permanent basis. Complicating matters, it is not clear how remote work affects productivity. 


Some 15 percent of executives surveyed by McKinsey believe at least one-tenth of their employees could work remotely two or more days a week going forward, almost double the eight percent of respondents who expressed that intention before COVID-19. 


That includes 20 percent of executives surveyed in the United Kingdom and Germany, but only four percent of respondents in China. Only seven percent of respondents believed at least one-tenth of their employees could work three or more days a week remotely, McKinsey reports. 


Also, potential for remote work is highly concentrated in a handful of sectors, such as information and technology, finance, insurance and management, McKinsey notes. Some 34 percent of respondents from the information and technology sector said they expect to have at least 10 percent of their employees working remotely for at least two days a week after COVID-19, compared with 22 percent of executives from that sector surveyed before the pandemic, for example. 


There is some debate about whether remote work is less productive. Nor is it easy to figure out what to measure, or how. Output is the logical metric, but output is a fuzzy concept for most information workers. That leaves us with input measures, which may or may not be relevant to output. 


Still, the long-term trend towards more flexible work patterns is likely to get a tangible and mostly sustainable boost from pandemic work-from-home experiences. That said, there are substantial differences between casual work from home, telecommuting and routine work from home, either full-time or part-time. 

Subjective employee impressions about their own productivity when working from home have to be taken cautiously. People might believe they are “more productive” when working from home, but that does not mean they actually are, even if we can agree on how to measure office worker or knowledge worker productivity. 


So far, results seem inconclusive. Aternity reports that "workers are getting less productive the longer the remote work shift continues." 


GitLab reports that many workers feel more productive working from home. "Employees find themselves to be overall more productive (52 per cent) and efficient (48 per cent)," GitLab says.

Work on 6G Might Actually be Starting Later than for 4G and 5G

With 5G just launching commercially, it might seem odd that we already are hearing talk about 6G. But there is an argument to be made that, by historical standards, the 6G process actually is starting later than either the 4G or 5G process did. 


Consider that the first 4G framework was set by the International Telecommunication Union in 1998. What became Long Term Evolution was proposed in 2004, and early commercial deployment began about 2010. 


So 4G took about 12 years from early conceptual work to commercialization, complicated by the distraction of two major alternatives, WiMAX and LTE. 


For 5G, early conceptual work began about 2008. The standard was largely solidified by 2017, and South Korea launched commercial 5G in 2019. The point is that the time from early conceptual work to commercial deployment was roughly 11 years. 


Samsung believes 6G could be available commercially as early as 2028, with widespread availability by 2030. So early commercialization could happen in about seven years, with deployment at scale in about nine years.


Many of us would expect to see early 6G deployment by about 2030. If so, then work on 6G actually is starting later than was the case for either 4G or 5G. 


So two outcomes might be suggested. On one hand, 6G might arrive later than we presently expect. On the other hand, if 6G arrives about when we expect (2030), then the development process from conceptual work to standards completion and commercial deployment will happen faster than was the case for 4G and 5G.
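To make the comparison concrete, here is a minimal sketch of the timeline arithmetic using the dates cited above. It is illustrative only: the 6G "concept" year is an assumption, since early conceptual work is only now getting organized.

```python
# Rough generation-timeline arithmetic using the dates cited in this post.
# The 6G "concept" year (2021) is an assumption, not an announced date.
timelines = {
    "4G": {"concept": 1998, "commercial": 2010},  # ITU framework to early LTE launches
    "5G": {"concept": 2008, "commercial": 2019},  # early R&D to South Korea launch
    "6G": {"concept": 2021, "commercial": 2030},  # assumed start to expected launch
}

for generation, years in timelines.items():
    span = years["commercial"] - years["concept"]
    print(f"{generation}: {span} years from concept to commercialization")

# Expected output:
# 4G: 12 years from concept to commercialization
# 5G: 11 years from concept to commercialization
# 6G: 9 years from concept to commercialization
```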


As one example, the NextG Alliance, formed by the Alliance for Telecommunications Industry Solutions (ATIS), aims to “advance North American global leadership over the 5G evolutionary path and 6G early development” and will hold its first meeting in November 2020.


The NextG Alliance says it hopes to:

  • Create a Next G development roadmap.

  • Develop a set of national priorities that will influence government applied research funding and promote incentivized government actions.

  • Align development with commercialization outcomes.


Skeptics might argue it is way too early to talk about 6G. But the history of 4G and 5G suggests we actually are starting the 6G process later. If early conceptual work is just starting now, then the full development process--compared to 4G and 5G--would be compressed by perhaps two to four years.

In some ways the 6G development process might be easier. There were two different versions of 4G proposed and adopted commercially, which arguably slowed the development process.

5G did not suffer from that problem, but did introduce some new concerns about capital investment cost, as the addition of millimeter wave spectrum for the first time raised new issues about the number of required cell locations and the cost of "X" haul traffic from radio heads back to the core network.

6G likely will not have the confusion of two competing proposed standards or as much concern about X haul or small cells, as much of that infrastructure will have been put into place to support 5G. If so, then a more-compressed development cycle is feasible.

As 5G built on 4G, so 6G is likely to build on 5G, both in terms of infrastructure and other architectural choices. The inclusion of millimeter wave spectrum should ease issues associated with a possible move to terahertz frequencies. 

New antenna technologies to support millimeter wave signals, advanced duplex technologies (TDD), dense fiber X haul, spectrum sharing and use of artificial intelligence all should apply to 6G as well. 


Tuesday, October 13, 2020

When Advice to Move Up the Stack is Mistaken

Business strategy for larger tier-one service providers arguably differs from what is possible and prudent for smaller providers and specialists in the connectivity business. In a business that increasingly is saturated and growing slowly (perhaps less than one percent per year, globally), revenue growth has to come from something other than legacy and traditional communications services.


source: GSMA 


As this chart suggests, tier-one service providers are betting on growth outside their legacy communications core, and many have made substantial progress. 


It has not been easy. Historically, it has been difficult for tier-one telecom providers to grow revenue in products and services outside core connectivity. 


That entails higher risk than the legacy business traditionally carried, but arguably is necessary as the traditional growth engines sputter. It always is difficult for any firm or industry to move away from its perceived core competency, but it arguably is easier for a firm higher in the stack to move downwards than it is for any provider in the value chain to move upwards.


In other words, it should be “easier” for Google, Alibaba, Facebook, Microsoft or Apple to move “down the stack” than it is for China Telecom, NTT or AT&T to move up the stack. 


That said, it can be argued that some firms have been more successful than others, and perhaps the adage that “having too much money” is dangerous for any startup or big established provider is apt. It might be the case that Comcast and T-Mobile have been more successful with their acquisition strategies than others because they were relatively capital starved.


Some might argue those sorts of firms also benefit because they are less bureaucratic, and therefore more likely to make decisions less encumbered by internal political concerns, and to make those decisions faster. 


Specialist providers and smaller firms rarely have the human or financial capital to do much other than concentrate on core connectivity services, so the advice to “move up the stack” towards applications is unwise. 


Similar advice to “move into adjacent areas of the value chain” likewise is unwise, and for the same reasons: small firms do not have the human or financial capital to do so, and could not achieve scale even if those other issues were not constraints. 


The point is that the oft-heard advice to move up the stack or across the ecosystem is mostly applicable only to large tier-one firms. Smaller and specialist firms generally have no choice but to find a niche and stick to connectivity services.


Monday, October 12, 2020

Split Computing After Edge Computing?

These days, all networks are becoming computing networks. Also, computing and communications historically have been partial substitutes for each other: architects can substitute local computing for remote computing, and vice versa. Mainframe, onboard, client-server, cloud and edge computing all use different mixes of communications and computation resources. 


Edge computing, most agree, is among the hottest of computing ideas at the moment, and reduces use of communications capital by putting computing resources closer to edge devices. 


But technologists at Samsung believe more distribution of computing chores is possible. They use a new term “split computing” to describe some future state where computing chores are handled partly on a device and partly at some off-device site.


In some cases a sensor might compute partially using a phone. In other cases a device might augment its own internal computing with use of a cloud resource. And in other cases a device or sensor might invoke resources from an edge computing resource. 

source: Samsung


Conventional distributed computing is based on a client-server model, in which the implementation of each client and server is specific to a given developer, Samsung notes. 


To support devices and apps using split computing, an open source split computing platform or standard would be helpful, Samsung says.


With split computing, mobile devices can effectively achieve higher performance even as they extend their battery life, as devices offload heavy computation tasks to computation resources available in the network.
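A minimal sketch of that offloading decision might look like the following. It is purely illustrative: the names (ComputeSite, Task, choose_site) and the simple cost model are assumptions for illustration, not Samsung's proposed platform or any standardized API.

```python
# Purely illustrative sketch of a split-computing offload decision.
# All names and the cost model here are assumptions, not a published API.

from dataclasses import dataclass
from typing import List


@dataclass
class ComputeSite:
    name: str            # "device", "phone", "edge" or "cloud"
    available: bool      # reachable and authorized right now?
    latency_ms: float    # round-trip time to reach this resource
    energy_cost: float   # relative battery cost per unit of work offloaded


@dataclass
class Task:
    compute_load: float       # relative heaviness of the computation
    latency_budget_ms: float  # how long the app can wait for a result


def choose_site(task: Task, sites: List[ComputeSite]) -> ComputeSite:
    """Pick the reachable site that meets the latency budget at the lowest
    battery cost; fall back to running on the device itself."""
    candidates = [
        s for s in sites
        if s.available and s.latency_ms <= task.latency_budget_ms
    ]
    if not candidates:
        return next(s for s in sites if s.name == "device")
    return min(candidates, key=lambda s: s.energy_cost * task.compute_load)


# Example: a heavy task with a 50 ms budget lands on the edge node, because
# the cloud misses the latency budget and local execution costs the most battery.
sites = [
    ComputeSite("device", True, 0.0, 1.0),
    ComputeSite("edge", True, 10.0, 0.2),
    ComputeSite("cloud", True, 80.0, 0.1),
]
print(choose_site(Task(compute_load=5.0, latency_budget_ms=50.0), sites).name)  # "edge"
```

The more interesting problem, as Samsung suggests, is standardizing how such decisions and the task hand-off itself are expressed across devices and vendors, rather than the decision logic itself.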


You might agree that the split computing concept is in line with emerging computing and communications fabrics that increasingly operate by using any available resource. Up to this point, that has been seen most vividly in device or app use of Wi-Fi. 


In the future we may see more instances of devices using any authorized and available frequency, network, tower or computing resource.


Sunday, October 11, 2020

Why Did Deregulation Happen in Telecom?

It is not so easy, even in hindsight, to explain why the world moved from viewing telecommunications as a natural monopoly to the current view that the industry is at least substantially amenable to competition, especially when using mobile and wireless platforms. Facilities-based competition remains a tougher challenge in the fixed networks area, though.


In 1996, U.S. policymakers ratified a huge shift in regulation by passing the Telecommunications Act of 1996, which deregulated local telephone and communications service. 


By doing so, lawmakers decided to test the theory that telecommunications really was not a natural monopoly, challenging more than a century of regulatory thinking and practice. In the subsequent years, regulators globally also moved to deregulate telecom. 


New technology, a desire for higher rates of innovation and higher consumer welfare were among the drivers of the change in thinking. 


It might seem obvious now, but the change from analog to digital signal processing and transmission was seen to have huge implications, such as making possible multi-purpose networks at a time when the state of the art was application-specific networks. 


But existing telecom law effectively prevented existing firms from exploring changes in their core businesses that technology was making possible. Local telephone service once was a monopoly, as was cable TV service. Radio and TV broadcasters were limited in each market to a few providers. 


Earlier, even attachment of personal equipment to the AT&T network was forbidden. Users could not attach their own modems, for example, or their own phones. That was the case until 1968, when the Carterfone decision legalized attachment of customer-owned premises equipment to the network.  


That change followed earlier, smaller steps, when limited changes in the competitive regime were introduced at the edges of the business, allowing some competition to the monopoly provider AT&T in two-way radio services for some customers in a few markets. That began a slow process of widening competition, including long-distance signal carriage. 


Electrical, gas and water systems, on the other hand, remain regulated as natural monopolies, with only a single legal supplier. As predicted, consumer benefits have been produced, in both mobile and fixed network realms. 


Between 2008 and 2019, for example, communications prices dropped, while prices for the other utility services increased. 

Common Carrier Industry Prices Rise, Telecom Prices Fall

One example of the impact of competition in a capital-intensive industry can be seen by comparing retail price trends in the European, Japanese and South Korean electricity, natural gas and water industries--still regulated as natural monopolies--and the telecommunications business, which is deregulated and competitive. 


Between 2008 and 2019, for example, communications prices dropped, while prices for the other utility services increased. 

source: ETNO


The other trend we can note in those same markets is relatively inelastic demand for communications. In other words, people will only spend so much for communications. As a percentage of gross domestic product, for example, communications spending by households fell between 2006 and 2019. 


source: ETNO


Basically, households tend to spend between 1.5 percent and 2.25 percent of GDP on communications. Other studies of household spending in developed markets show the same pattern. 


Over time, household spending on connectivity services has fallen, and business spending has not moved much, either.  


Consumer spending does not change too much from year to year. Nor does the percentage of income spent on various categories change too much. 

source: IDC


In Myanmar, a new mobile market, spending per household might be as high as eight percent of total spending. In Australia, communications spending (devices and services) might be just 1.5 percent of household spending.  


In South Africa, households spend 3.4 percent of income on communications (devices, software and connectivity). In Vietnam, communications spending is about 1.5 percent of total consumer spending.


In the United States, all communications spending (fixed and mobile, devices, software and connectivity, for all household residents) is perhaps 2.7 percent of total household spending. Measured more narrowly, U.S. household spending on communications might be as low as one percent of household spending.

Saturday, October 10, 2020

What is Cloud Native?

A "cloud native" network is virtualized, fully based on apps using application program interfaces, self-healing, auto-scaling, using software as a service principles and resources. It is, in essence, no longer a nailed-up telecom network with an embedded operating system but a modern computing network that has fully separated the transport and physical layers from higher layers. 

Will AI Fuel a Huge "Services into Products" Shift?

As content streaming has disrupted music and is disrupting video and television, so might AI disrupt industry leaders ranging from ...