Friday, August 7, 2020

Does Facebook’s Shift on Regulation Also Lead to a Change in Free Speech Rights?

For those of you who are not communications “regulatory geeks,” this headline from Facebook might not seem especially unusual: “Four Ideas to Regulate the Internet,” published under Mark Zuckerberg’s name. 

“I believe we need a more active role for governments and regulators,” Zuckerberg says. “By updating the rules for the internet, we can preserve what’s best about it--the freedom for people to express themselves and for entrepreneurs to build new things--while also protecting society from broader harms.”


For some of us who need to understand the role of regulation in shaping any industry’s profit potential, this position is important.


For others who watch the development of U.S. regulatory models related to “freedom of speech,” it is perhaps shocking. We can debate the role of content platforms in shaping news and content with enormous political implications. But almost nobody would argue with the notion that Facebook, Google, Twitter and others now have more power or influence than traditional media.


And while most might agree that government censorship is not a good thing, platforms are private firms not traditionally covered by the First Amendment. Yet some might argue the danger of free speech suppression now comes not primarily or exclusively from “the federal government” but also “from the platforms.”


It is complicated, to be sure. Democracy is a means, not an end. Tyrannical behavior can be freely exercised by citizens using democratic means. Companies and mobs can restrict freedom of speech just as much as the federal government. 


Still, there is an important possible new shift here. Facebook has the right of free speech, “as a speaker.” But Zuckerberg now says the company is willing to live with regulations of various sorts that somewhat extend the right of free speech to “listeners, viewers and readers.”


Those two ways of looking at “who” has the right of free speech have changed over the centuries. Originally, the right was held only by speakers. In the era of electronic communications, a different attribution has happened.


The “right” was deemed to be possessed by “listeners or viewers.” That is the logic behind “equal time rules” for political speech on TV or radio broadcasts, for example. Over the last couple of decades, such rules have been peeled away, returning to the original sense of rights belonging to speakers.


What Facebook now proposes, at least in principle, is that some amount of rights now shift back to protecting the political speech rights of listeners, viewers and readers. 


It is possibly the sign of an important philosophical shift. Almost all arguments about “fairness” for political speech on platforms are based on the idea that it is the audience which must be protected, not the platform as “speaker.”


It is possible we could see the beginnings of a long-term shift back to the concept that it is the listener, viewer or reader whose political rights are to be protected, a notion that arose only with the advent of electronic content. 


First, let us be clear: the U.S. Constitution bars censorship or fettering of clearly political speech by the federal government. 


What has never been clear is whether regulation can be, or ought to be, applied to private actors in the economy, especially giant platform companies in the content business. In the past, the federal government nonetheless has found reasons to impose some restrictions on private actor content freedom.


In the past, the justification has been “use of public spectrum.” Something like that was applied to cable TV companies, which were held to have public interest obligations because they used public rights of way.


Irrespective of the logic and soundness of such reasoning, the matter of “who has the free speech right” was changed. 


“Lawmakers often tell me we have too much power over speech, and frankly I agree,” says Zuckerberg. Right now the issues are privacy, harmful content, election integrity and data portability. Those are, some might argue, relatively peripheral to the matter of protecting free political speech. 


The biggest potential shift, though, is the longer term balancing of the rights of speakers and audiences. In principle, the First Amendment to the U.S. Constitution protects citizens from suppression of free speech only by the government. 


When TV and radio broadcasting and cable TV developed, some amount of shift occurred. The rights were partially seen as being held by viewers and listeners. With the advent of huge and dominant content platforms, that might expand to include readers. 


Such changes take time, but have happened before. And that is why Facebook’s position matters. Perhaps it is the first of many changes that could affect and change platform roles in protecting free speech. 


Virtually all moves in the direction of platform regulation would be based on the rights of audiences, not speakers. It has happened before. And it always is quite tricky.


Thursday, August 6, 2020

Advanced Technology Takes Longer Than You Think to Become Mainstream

Advanced technology often does not get adopted as rapidly as the hype would have you believe. In fact, most useful advanced technologies tend not to go mainstream until adoption reaches about 10 percent. That is where the inflection point tends to occur. That essentially represents adoption by innovators and early adopters. 

source: LikeFolio


One often sees charts that suggest popular and important technology innovations are adopted quite quickly. That is almost always an exaggeration. The issue is where to start the clock running: at the point of invention or at the point of commercial introduction? Starting from invention, adoption takes quite some time to reach 10 percent adoption, even if it later seems as though it happened faster. 

source: Researchgate


Consider mobile phone use. On a global basis, it took more than 20 years for usage to reach close to 10 percent of people. 

source: Quora


That is worth keeping in mind when thinking about, or trying to predict, advanced technology adoption. It usually takes longer than one believes for any important and useful innovation to reach 10-percent adoption.


source: MIT Technology Review


That is why some might argue 5G will hit an inflection point when about 10 percent of customers in any market have adopted it.
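That 10-percent inflection point can be sketched with a simple logistic (S-curve) adoption model. The midpoint and growth-rate parameters below are illustrative assumptions, not fitted to any real technology’s data:

```python
import math

def logistic_adoption(t, saturation=1.0, midpoint=15.0, rate=0.4):
    """Fraction of the population that has adopted by year t after
    commercial introduction. All parameters are illustrative."""
    return saturation / (1.0 + math.exp(-rate * (t - midpoint)))

def years_to_reach(share, midpoint=15.0, rate=0.4):
    """Invert the logistic: the year a given adoption share is reached."""
    return midpoint + math.log(share / (1.0 - share)) / rate

# With these assumed parameters, 10 percent adoption takes about 9.5
# years, but the climb from 10 percent to 50 percent takes only about
# 5.5 more years -- slow at first, then fast past the inflection.
t_10_percent = years_to_reach(0.10)
t_50_percent = years_to_reach(0.50)
```

The asymmetry, a long wait to reach 10 percent followed by rapid mainstream growth, is the pattern the adoption charts above describe.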

Wednesday, August 5, 2020

U.S. Business Advanced Technology Adoption Still Very Low

No matter how sexy industry observers might find advanced information technology, most businesses, and most business managers and owners, rarely report actually using advanced technologies at the moment, with the exception of personnel at very-large firms, a study sponsored by the U.S. Census Bureau finds. 


“We find that adoption of advanced technologies is relatively low and skewed, with heavy concentration among older and larger firms,” the study finds.


At least one reason for muted current adoption seems to be that applying advanced technology requires significant investments in other technologies and the ability to change business processes to take advantage of those technologies.


“We also find that technology adoption displays features of a hierarchical pattern, with stages of technology adoption of increased sophistication that appear to build on one another,” study authors say. In other words, most advanced technology is not “rip and replace.” To take advantage of new technologies, lots of other things must also change. 


In fact, responses from a sample of about 850,000 firms suggest adoption of most advanced technologies--ranging from touchscreens to machine learning, voice recognition to machine vision, natural language processing to automated vehicles--is quite low, mostly in the low single digits. 

source: U.S. Census Bureau, Wired


Tuesday, August 4, 2020

Maybe Covid-19 Will Wind up Changing Very Little

It seems to be conventional wisdom that the Covid-19 pandemic “changes everything.” Consider other recent major disruptions, ranging from the internet bubble and crash of 2001 to the global Great Recession of 2008. The SARS epidemic, for example, crashed tourism in several Asian countries in 2003. Before the end of the year, tourism levels were back to pre-epidemic levels. 


source: Deloitte


Likewise, consumer spending in several Asian countries fell during the SARS epidemic of 2003, but again recovered to pre-epidemic levels quickly. The point is that disruptive events have proven not to change behavior all that much, once the issue has passed. 


source: Deloitte


The point is that people, businesses and industries have shown high ability to bounce back from severe disruptions, and perhaps much more quickly than you might expect.


Frictionless Business is Partly about Productivity

Business friction is anything that prevents a potential customer from buying your product or service. In a broad sense, friction applies to every part of a business: strategy, product development, technology, distribution channels, marketing, customer service, governance, human resources, capital resources, information technology, customer segmentation and supply chains. 


The immediate thought is that frictionless business involves only “efficiency,” with the least resource input for any given level of output. Frictionless business actually also applies to “effectiveness,” the ability of a business or organization to achieve results that matter. 


According to the U.S. Bureau of Labor Statistics, for example, the productivity (efficiency) of industries including fixed networks, computers and peripherals, communications equipment, semiconductors and non-farm businesses actually saw increased friction (lower productivity) between 2000 and 2017, compared to the 1997 to 2000 period.


source: U.S. Bureau of Labor Statistics


Only the mobile service provider business saw higher productivity over the same time frames. Keep in mind that “productivity” is a combination of output and input--goods and services volume produced compared to the hours required to create those products--along with end user demand. 


Friction can result from any combination of changes in either supply or demand. In the case of fixed network services, much of the fall in productivity comes from reduced demand for the products, and hence lower sales volumes. Even as inputs have been trimmed over time, capital investment and operating costs have not fallen as fast as demand. 


There is a greater amount of stranded assets, for example, as the percentage of homes or businesses buying legacy services drops. That means the overhead cost of the network has to be borne by fewer paying customers. 
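The stranded-asset arithmetic is straightforward. A minimal sketch, with hypothetical cost and take-rate figures chosen only for illustration:

```python
def overhead_per_customer(annual_network_cost, homes_passed, take_rate):
    """Fixed network cost borne by each paying customer. The network
    must pass every home whether or not the occupants buy service."""
    paying_customers = homes_passed * take_rate
    return annual_network_cost / paying_customers

# Hypothetical: a $10 million-per-year network passing 50,000 homes.
at_60_pct = overhead_per_customer(10_000_000, 50_000, 0.60)  # about $333/year
at_30_pct = overhead_per_customer(10_000_000, 50_000, 0.30)  # about $667/year
```

Halving the take rate doubles the network overhead each remaining customer must carry, which is exactly the friction described above.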


Adoption (percentage of potential customers who actually do buy) also matters. These days, “everybody” uses a mobile service. Less than half of households buy even a single fixed network service from any supplier. 


Frictionless business is the sum total of all actions any business can take to overcome friction, creating and keeping customers, increasing the volume of products sold to those customers with acceptable profit margins, maintaining or increasing market share with superior return on investment. 


Frictionless business reduces every barrier to business success, allowing firms to operate more effectively--doing the right things--as well as efficiently, with minimal waste and maximum productivity. 


Companies that operate with less friction are able to achieve superior results with less resource intensity. To the extent that cloud computing represents a more effective way to deliver internet-based apps and services, as well as providing cost savings and flexibility, it represents a move in the direction of frictionless business. 


To the extent that hyperscale and other data centers are required to support cloud-based apps, and to the extent that cloud apps represent higher value for customers and users, higher revenues and profits for suppliers, featuring new products available at lower costs and with different business models, data centers represent a move in the direction of frictionless business. 


source: Wall Street Journal, Synergy Research


Friction matters for employees and workers as much as it does for companies. One sometimes hears it said that income inequality or wealth inequality is the result of “greedy” people. But worker compensation is directly related to productivity, itself an indicator of friction.


Where friction is least, compensation is highest; where friction is greatest, compensation arguably is lowest. In food services and accommodation, for example, compensation change is directly--one for one--related to changes in productivity. Mining has negative productivity. In the short run, compensation outstrips gains in produced goods. 


Information technology nearly always has the highest improvements in productivity, with comparably lower changes in compensation. That is partly because production is “asset light.” Digital goods are easy to create and reproduce, compared to physical goods. 


Higher usage (demand) is not related in a linear way to the costs of producing the next incremental units. 

source: Bureau of Labor Statistics


Sunday, August 2, 2020

Pareto Principle and Telecom Revenues and Profit

The Pareto Principle, often colloquially known as the 80/20 rule, explains many phenomena in nature, science and business, including the connectivity business. 


In the United Kingdom, for example, 70 percent of people live in areas using 20 percent of cell sites. Ericsson estimates that 20 percent of cell sites carry 70 percent of 3G traffic. We also should expect deployment of about 80 percent of small cells in hyper-dense or very-dense areas. 


CenturyLink earns 75 percent of its revenue from enterprise services. Some 80 percent of telco profits come from 20 percent of the products or customers. AT&T has earned the bulk of its actual profits from business services. 


Typically, 80 percent of any company’s profit is generated by 20 percent of its customers; 80 percent of complaints come from 20 percent of customers; 80 percent of profits come from 20 percent of the company’s effort; 80 percent of sales come from 20 percent of products or services; 80 percent of sales are made by 20 percent of sellers and 80 percent of clients come from 20 percent of marketing activities.


There are many other common Pareto examples:


  • 80 percent of car accidents are caused by 20 percent of young people

  • 80 percent of lottery tickets are bought by 20 percent of society

  • 80 percent of air pollution is caused by 20 percent of the population

  • 80 percent of all firearms are used by 20 percent of the population

  • 80 percent of all Internet traffic belongs to 20 percent of websites

  • 80 percent of car crashes happen within the first 20 percent of the distance covered

  • 80 percent of mobile phone calls come from 20 percent of the population

  • 80 percent of the time people use 20 percent of the tools at their disposal


It is estimated that 20 percent of Covid-19 cases were responsible for 80 percent of local transmission.  Some 80 percent of users will only use 20 percent of any piece of software's features. Microsoft also observed that 20 percent of software bugs will cause 80 percent of system errors and crashes.


It is estimated that the top 20 percent of players are responsible for 80 percent of a basketball team’s success. 
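For a Pareto (power-law) distribution there is a standard closed-form way to compute what share of the total the top fraction of a population holds, which shows what tail index the classic 80/20 split implies. This is the textbook Lorenz-curve result, not anything specific to telecom data:

```python
import math

def top_share(p, alpha):
    """Share of the total held by the top fraction p of a population
    whose sizes follow a Pareto distribution with tail index alpha > 1
    (standard Lorenz-curve result: p ** ((alpha - 1) / alpha))."""
    return p ** ((alpha - 1.0) / alpha)

# The classic 80/20 rule corresponds to alpha = log(5)/log(4), about 1.16:
# with that tail index, the top 20 percent hold exactly 80 percent.
alpha_80_20 = math.log(5) / math.log(4)
share = top_share(0.20, alpha_80_20)  # 0.80
```

A heavier tail (alpha closer to 1) concentrates even more of the total in the top fraction; a lighter tail spreads it more evenly.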


The issue is how to apply Pareto in the connectivity business, as telecom revenue growth rates are quite low, cash flow is shrinking, returns on invested capital are dropping and consequently equity valuations are under pressure. 


The now-obvious observation is that connectivity provider revenue growth is a fraction of economic growth. To change that situation, something other than “keep doing what you have been doing” is required; more of the same will not produce different results. 

source: IDATE


Freedom to maneuver often hinges on the regulatory regime. Tier-one service providers with an obligation to “serve everyone” cannot make the same choices as non-regulated or lightly-regulated firms able to choose their geographies, customers and products. 


Carriers of last resort cannot simply choose not to serve consumer customer segments, or focus only on urban areas. Specialist providers can do so. 


Tier-one service providers have learned to rely on mobility services, though. 


In Western Europe, perhaps 80 percent of revenue growth is driven by mobile services, though mobility revenues overall are about 46 percent of total revenues. 

source: A.D. Little


That is even more true in other regions, where mobility revenue is as much as 82 percent of all connectivity provider revenues, and where mobile infrastructure accounts for most of the new facilities-based competition between service providers. 


source: IDATE


The traditional difference in profit margins between enterprise and consumer accounts also explains why many believe 5G profits will disproportionately be created by enterprise 5G services, not consumer 5G. 


Friday, July 31, 2020

As 5G Focuses on Enterprise Use Cases, 6G Might Focus on Virtualized and Self-Learning Networks

Mobile and fixed network operators constantly are challenged to reduce capital investment and operating costs as a way of compensating for low revenue growth, challenged profit margins and ever-increasing bandwidth consumption by customers whose propensity to pay is sharply limited. 

The very design of future 6G networks might work to help reduce capex and opex, while incorporating much more spectrum, at very high frequencies and basing core operations on use of machine learning (a form of artificial intelligence that allows machines to learn autonomously). 

New 6G networks might rely even more extensively on virtualization than do 5G networks, featuring now-exotic ways of supporting internet of things sensors that require no batteries, a capability that would dramatically reduce IoT network operating costs. 

It is possible 6G networks will be fundamentally different from 5G in ways beyond use of spectrum, faster speeds and even lower latency. 6G networks might essentially be “cell-less,” able to harness ambient energy for devices that require no batteries and feature a virtualized radio access network. 


The “cell-less” architecture will allow end user devices to connect automatically to any available radio, on any authorized network. Harvesting of ambient energy will be especially important for internet of things devices and sensors that might not require any batteries at all to operate, reducing operating cost. 


source: IEEE


The virtualized radio access network will provide better connectivity, at possibly lower cost, as user devices can use the “best” resource presently available, on any participating network, including non-terrestrial platforms (balloons, unmanned aerial vehicles or satellites). 


Backhaul might be built into every terrestrial radio, using millimeter wave spectrum both for user-facing and backhaul connections, automatically configured. That will reduce cost of network design, planning and backhaul. 


Researchers now also say such federated networks will be based on machine learning (artificial intelligence), which will be fundamental to the way 6G networks operate. Devices will not only use AI to select a particular radio connection, but will modify behavior based on experience. 


The network architecture might be quite different from today’s “cellular” plan, in that access is “fully user centric,” allowing terminals to make autonomous network decisions about how to connect to any authorized and compatible network, without supervision from centralized controllers.


Though machine learning arguably already is used in some ways to classify and predict, in the 6G era devices might also use artificial intelligence to choose “the best” network connection “right now,” using any available resource, in an autonomous way, not dictated by centralized controllers.  


To be sure, in some ways those changes are simply extrapolations from today’s network, which increasingly is heterogeneous, able to use spectrum sharing or Wi-Fi access, using radio signal strength to determine which transmitter to connect with. 
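The baseline behavior being extrapolated, attaching to whichever transmitter offers the strongest signal, can be sketched in a few lines. The radio types and signal values below are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Radio:
    name: str        # e.g. "macro-cell", "small-cell", "wifi", "satellite"
    rsrp_dbm: float  # received signal strength at the device (dBm)

def select_radio(candidates):
    """Baseline policy: attach to the strongest signal. A 6G
    'user-centric' device might instead learn a policy that also
    weighs load, latency and energy; this sketch covers only the
    signal-strength rule used today."""
    return max(candidates, key=lambda r: r.rsrp_dbm)

available = [
    Radio("macro-cell", -95.0),
    Radio("small-cell", -78.0),
    Radio("wifi", -82.0),
]
best = select_radio(available)  # the small cell, at -78 dBm, wins
```

The 6G vision described above replaces this single fixed rule with an autonomous, learned decision made by the device itself.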


Architecturally, the idea is that any user device connects to the radio access network, not to any specific radio, using any specific base station, say researchers Marco Giordani, Michele Polese, Marco Mezzavilla, Sundeep Rangan and Michele Zorzi. 

source: IEEE


Overall, many 6G features will be designed to reduce the cost and improve the efficiency of the radio access network, especially to create “pervasive” connectivity, not just to add more bandwidth and lower latency for end users and devices.


Will AI Fuel a Huge "Services into Products" Shift?

As content streaming has disrupted music, is disrupting video and television, so might AI potentially disrupt industry leaders ranging from ...