Friday, October 30, 2020

"Digital Transformation" Will be as Hard as Earlier Efforts at Change

New BCG research suggests that 70 percent of digital transformations fall short of their objectives. 


That would not surprise any of you familiar with the general success rate of major enterprise technology projects. From 2003 to 2012, only 6.4 percent of federal IT projects with $10 million or more in labor costs were successful, according to a Standish Group study cited by Brookings.

source: BCG 


IT project success rates range between 28 percent and 30 percent, Standish also notes. The World Bank has estimated that large-scale information and communication projects (each worth over U.S. $6 million) fail or partially fail at a rate of 71 percent. 


McKinsey says that big IT projects also often run over budget. Roughly half of all large IT projects—defined as those with initial price tags exceeding $15 million—run over budget. On average, large IT projects run 45 percent over budget and seven percent over time, while delivering 56 percent less value than predicted, McKinsey says. 


Significantly, 17 percent of IT projects go so badly that they can threaten the very existence of the company, according to McKinsey. 


The same sort of challenge exists whenever telecom firms try to move into adjacent roles within the internet or computing ecosystems. As with any proposed change, the odds of success drop as the number of required approvals or activities increases.


The rule of thumb is that 70 percent of organizational change programs fail, in part or completely. 


There is a reason for that experience. Assume you propose some change that requires just two approvals to proceed, with the odds of approval at 50 percent for each step. The odds of getting “yes” decisions in a two-step process are about 25 percent (.5x.5=.25). In other words, if only two approvals are required to make any change, and the odds of success are 50-50 for each stage, the odds of success are one in four. 


source: John Troller 


The odds of success get longer for any change process that actually requires multiple approvals. Assume there are five sets of approvals. Assume your odds of success are high--about 66 percent--at each stage. In that case, your odds of success are about one in eight for any change that requires five key approvals (.66x.66x.66x.66x.66, or (2/3)^5 = 32/243, roughly 13 percent). 
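

The compounding is easy to verify in a few lines of Python (the per-stage probabilities are the illustrative figures used above, not data from any study):

def odds_of_success(per_stage_odds):
    # The chance of clearing every independent approval is the product
    # of the per-stage approval probabilities.
    result = 1.0
    for p in per_stage_odds:
        result *= p
    return result

# Two approvals at 50 percent each: 0.25, or one chance in four.
print(odds_of_success([0.5, 0.5]))

# Five approvals at two-thirds each: 32/243, about 0.13 -- one in eight.
print(odds_of_success([2/3] * 5))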


The same sorts of issues occur when any telecom firm tries to move out of its core function within the ecosystem and tries to compete in an adjacent area. 


Consultants at Bain and Company argue that the odds of success are perhaps 35 percent when moving to an immediate adjacency, but drop to about 15 percent when two steps from the present position are required and to perhaps eight percent when a move of three steps is required.

source: Bain and Company
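

Those figures are roughly what compounding would predict. In the sketch below, the 0.45 per-step multiplier is inferred purely for illustration--it is not a Bain number--but it reproduces the reported odds fairly closely:

odds = 0.35  # Bain's estimated odds for a move to an immediate adjacency
for steps in (1, 2, 3):
    print(f"{steps} step(s) from the core: about {odds:.0%}")
    odds *= 0.45  # hypothetical per-step survival multiplier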


The common thread here is that any big organizational change, whether an IT project or a move into new roles within the ecosystem, is quite risky, even if necessary. The odds of success are low, for any complex change, no matter how vital.


Why 4G Sometimes is Faster than 5G

As always, the amount of spectrum available to any mobile service provider correlates with potential data throughput. As AT&T, for example, has rolled out 5G service, it has relied on low-band assets initially.


And no amount of fancy signal processing is going to compensate for the smaller amount of spectrum available to support 5G, compared to 4G. If you look at the total amount of spectrum supporting AT&T’s 5G coverage, you can see that the 4G spectrum is more capacious. 


source: PCmag 


That means AT&T’s 5G network--for the moment--offers less speed than the 4G network. That will change over time, and likely quite substantially. 


Over the last decade, average (or perhaps typical) mobile data speeds have grown exponentially--a straight line on a log-scale chart--according to data compiled by PCmag. I cannot tell you whether the graph shows median or mean speeds, but the point is that, assuming the same methodology is used for all data, the trend would still hold. 

 

source: PCmag 


There is no reason to believe 5G will fail--over time--to continue that exponential trend, given the release of huge amounts of new spectrum, expanded use of spectrum sharing and spectrum re-use, plus small cell access.
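

A small sketch shows why steady multiplicative growth traces a straight line on a log-scale chart. The starting speed and the 30 percent annual growth rate are hypothetical, chosen only to illustrate the shape of the trend:

import math

speed_mbps = 5.0  # hypothetical starting speed
for year in range(2010, 2021):
    # log10 rises by the same increment every year: a straight line on a
    # log-scale chart, even as raw speeds compound dramatically.
    print(year, f"{speed_mbps:7.1f} Mbps", f"log10 = {math.log10(speed_mbps):.2f}")
    speed_mbps *= 1.3  # hypothetical 30 percent annual growth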


Wednesday, October 28, 2020

Need for Global Scale Will Limit Telco IoT, Edge Computing Success

Among other reasons, lack of global scale is likely to prevent most telcos or mobile operators from becoming leading providers of internet of things or edge computing solutions or platforms. Generally, scale economics work against most telcos, no matter how large. 


That is not to say large telcos cannot significantly diversify revenue streams. AT&T has managed to shift its revenue sources enough that perhaps 43 percent of total revenue comes from something other than connectivity services. Softbank (at least until recently) had managed to generate perhaps 33 percent of total revenue from non-connectivity sources, while KT had reached about the same level. 


source: GSMA 


Many other tier-one telcos have managed to add between 10 percent and 25 percent of total revenue from sources other than connectivity. The need for scale seems to apply for those operations as much as it matters for the core connectivity business. But there are issues beyond scale. 


To be sure, new services such as the internet of things and edge computing will make some contribution to service provider revenues. Still, most of the value and revenue from IoT will be created elsewhere in the value chain (semiconductors, devices, platforms, integration, application software), not in connectivity. 


Perhaps edge computing will show the same dynamics, as edge computing still is about computing. That means the leading suppliers of computing--especially cloud computing--have a reasonable chance of emerging as the leading suppliers of workload as a service at the edge. 


Simply, if it is logical to purchase compute cycles from a major cloud or premises computing supplier, it will likely make just as much sense to purchase edge compute the same way. 


In other words, customers tend to have clear preferences about the logical suppliers of various products, beyond scale. The phrase “best of breed” captures the thinking. If an enterprise or other entity is looking at premises computing, it looks to certain brands. If a company is looking for cloud computing, it looks to other brands. 


Almost never is a telco among the five logical potential choices for buying compute cycles or computing platforms. 


That noted, tier-one telcos have made important strides diversifying beyond core connectivity. Among the issues are the extent to which that can happen in the edge computing or IoT realms.


BT to Build Private 5G Network for Belfast Harbor

BT says it is building and will operate a private 5G network on behalf of Belfast Harbor, covering large parts of the 2,000-acre site in 2021. BT says it aims to build “a state-of-the-art 5G ecosystem within the Port.”


Aside from supporting mobile phone service, the private network will enable remote controlled inspection technology (presumably use of drones), reducing the need for workers to climb towers. The network also will support air quality sensors. 


One can guess from those two examples--and BT’s talk of developing an ecosystem--that most of the expected smart harbor applications have not yet been deployed or developed, or perhaps have not yet been adapted to work on the 5G private network. 


Joe O’Neill, Belfast Harbor chief executive, says the network is intended to support accurate tracking and integration of data gathered from multiple sources, and he expects the new network to help the port capture, process and interpret data in real time.


Tuesday, October 27, 2020

It's Hard to Win a Zero-Sum Game

Zero-sum games are hard to win, in part because every winner is balanced by a loser. Many mature mobile communications markets are largely zero-sum games these days. Market share, by definition, means one supplier gains exactly what another supplier loses. 


That is not the case for new, emerging or growing markets, where virtually all contestants can, in theory, gain while nobody loses. 


The substitution of machines for human labor is something of a zero-sum game as well.


The notion of tradeoffs is key for zero-sum markets. Consider minimum wage laws or unionization of employees. The issue is not whether those things are good or bad, but simply the tradeoffs that are made. 


Higher minimum wage laws produce higher wages for a smaller number of employees, in part because higher wage minimums increase the attractiveness of substituting machines for human labor. 


Higher union membership and bargaining power tends to produce higher wages for union members, but often at the cost of the number of people who are employed at unionized businesses. 


The other trend we see is that when forced to make a choice, unions tend to prefer saving a smaller number of jobs in return for gaining higher wages. Workers with less seniority normally are sacrificed in such deals. 


We can disagree about whether Uber and Lyft drivers are independent contractors or employees. But it is not hard to argue that if employee classification leads to higher minimum wages, it also will lead to fewer Uber and Lyft drivers able to work. 


We can make any choices we want about which outcome we prefer: more work for more people or higher wages for fewer workers. But the choices will inevitably be made. It’s a zero-sum game.


As more and more telecom markets reach saturation, zero-sum outcomes will appear in market share statistics or the number of 4G phone account subscribers versus 5G subscribers.


Mobile operators can bend the curves a bit by changing value propositions, adding new features and bundling devices and features (up to a point) to encourage customers to switch to more-expensive plans when the offers are compelling. But all of that occurs within a business that is largely a zero-sum game in many markets.


"When I Use a Word, it Means just What I Choose it to Mean"

Telecom terminology changes from time to time. These days, a “core network” for a private 4G or 5G network requires software we formerly associated with a mobile network core, such as base station control functions, routing, synchronization, timing and so forth.

These days “voice” often refers to the interface people use to interact with their phones, smart speakers or car communication systems, rather than the older notion of voice phone calls. 

Broadband used to be defined as any data rate of 1.544 Mbps or higher. These days it is some higher number that we adjust periodically. 

“Mobility” used to refer to use of mobile phones and cellular networks. These days it often refers to ride sharing. 

“Over the top” has been used in the past to describe video entertainment, messaging or voice applications provided by third parties and accessed by users and customers over any internet connection. Today it might more properly describe any service or application accessed over a communications network that is not owned by the supplier of access services.

“When I use a word, it means just what I choose it to mean,” says the Lewis Carroll character Humpty Dumpty. That’s an exaggeration as applied to the use of terms in telecom, but the general drift is correct. 

Wednesday, October 21, 2020

2020 was Tough for Mobile Subscriptions, Better for Fixed Network Internet Access

With the caveat that usage is not identical to revenue earned from that usage, 2020 has generally not been a favorable year for mobile operator subscription growth, with a couple of exceptions, according to the Economist Intelligence Unit. 


Fixed network internet access has held up better in most markets, with the strongest growth in the Middle East and Africa. 

source: Economist Intelligence Unit 


Regions that saw the strongest fixed network subscription growth will see lower rates in 2021, while mobile subscription growth will improve in virtually every region in 2021.


Friday, October 16, 2020

Brownouts are an Issue, But Might be Almost Unavoidable

Brownouts tend to be a typical feature of most networks using the internet protocol. Where most measures of availability (reliability, we sometimes call it) capture the times or percentage of time when a resource is unavailable, brownouts represent the times when a network or resource does not operate at its designed level of availability.


Just as an electrical brownout implies a severe drop in voltage but might not be an outage, a network brownout follows a sharp degradation in link quality but might result in the affected circuits still being technically “up,” Oracle says. “This decline may be triggered by congestion across the network or a problem on the service provider’s end.”
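

The distinction is easy to express in code. Here is a minimal sketch, with thresholds invented for illustration rather than drawn from any monitoring standard:

def link_state(measured_mbps, designed_mbps, degraded_fraction=0.5):
    # An outage means the resource is unusable; a brownout means the link
    # is technically "up" but performing below its designed level.
    if measured_mbps <= 0:
        return "outage"
    if measured_mbps < designed_mbps * degraded_fraction:
        return "brownout"
    return "normal"

print(link_state(0, 100))   # outage: the circuit is down
print(link_state(20, 100))  # brownout: up, but badly degraded
print(link_state(95, 100))  # normal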


Brownouts are in one sense “a feature not a bug,” a deliberate design choice that prioritizes resiliency over guaranteed throughput. That is the whole architectural principle behind internet protocol, which sacrifices routing control and quality of service on defined routes in favor of resiliency gained by allowing packets to travel any available route. 


And since the availability of any complex system is the product of the availabilities of all its elements, it should not come as a surprise that the complete end-to-end consumer experience is not “five nines,” though enterprise networks with more control of transport networks and end points might be able to replicate five-nines levels of performance. 


The theoretical availability of any network built from components in series is the product of the individual component availabilities (each equal to 100 percent minus that component’s failure rate). For example, if a system uses just two independent components, each with an availability of 99.9 percent, the resulting system availability is 0.999 x 0.999, or just under 99.8 percent. 


Component       Availability
Web             85%
Application     90%
Database        99.9%
DNS             98%
Firewall        85%
Switch          99%
Data Center     99.99%
ISP             95%

source: IP Carrier 


Consider a 24×7 e-commerce site with lots of single points of failure. Note that no single part of the whole delivery chain has availability of more than 99.99 percent, and some portions have availability as low as 85 percent.


The expected availability of the site would be 85%*90%*99.9%*98%*85%*99%*99.99%*95%, or about 59.87 percent. Keep in mind that we also have to factor in device availability, operating system availability, electrical power availability and premises router availability. 
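

A few lines of Python, using the figures from the table above, show how quickly series availability erodes:

# End-to-end availability of components in series is the product of the
# individual component availabilities.
components = {
    "Web": 0.85,
    "Application": 0.90,
    "Database": 0.999,
    "DNS": 0.98,
    "Firewall": 0.85,
    "Switch": 0.99,
    "Data center": 0.9999,
    "ISP": 0.95,
}

availability = 1.0
for name, value in components.items():
    availability *= value

print(f"End-to-end availability: {availability:.2%}")  # about 59.87%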


In choosing “best effort” over “quality of service,” network architects opt for “robustness” over “reliability.” 


source: Digital Daniels

Building Something from "Nothing"

“You can only build something from nothing with a private equity mindset,” says Matthias Fackler, EQT Partners head of infrastructure Continental Europe. It’s an interesting phrase. In the case of connectivity assets, it might imply a view that infrastructure--in some cases--is worth "nothing" or very little.


The statement also illustrates two key issues in the connectivity business: low revenue growth and low profitability.


source: STL Partners


So almost by definition, if private equity firms are active in an industry, it means there are financial stresses. 


Private equity typically buys public assets deemed to be underperforming, takes them private and then sells them, quite often within five years. That virtually always means long-lived commitments such as capital investment in networks are avoided, with the emphasis instead on operational restructuring. 


Public companies tend to “buy to keep.” Private equity always “buys to sell.” In other words, private equity acts as a turnaround specialist. Such firms arguably excel when able to identify the one or two critical strategic levers that drive improved performance. 


They have a relentless focus on enhancing revenue, operating margins, and cash flow, plus the ability--as private entities--to make big decisions fast. That might be a greater challenge than is typical as a result of the Covid-19 pandemic, which is depressing connectivity provider revenues and profit margins.  



Thursday, October 15, 2020

NVIDIA Maxine: AI and Neural Network Assisted Conferencing

Moore's Law Shows Up in iPhone, Nvidia Video Conferencing SDKs

Moore’s Law continues to be responsible for extraordinary advances in computational power and equally important declines in price. Apple’s new iPhone, for example, uses lidar sensors of a type that once cost $75,000.


Separately, researchers at Nvidia now have Maxine, a software development kit for developers of video conferencing services that uses artificial intelligence and a neural network to reduce video bandwidth usage to one tenth of H.264. Nvidia expects Maxine also will dramatically reduce costs. 


Maxine includes application programming interfaces for face alignment, gaze correction, face re-lighting and real-time translation, in addition to capabilities such as super-resolution, noise removal, closed captioning and virtual assistants, Nvidia says. 

These capabilities are fully accelerated on NVIDIA GPUs to run in real time in cloud-based video streaming applications.

Maxine-based applications let service providers offer the same features to every user on any device, including computers, tablets, and phones, Nvidia says.


NVIDIA Expects to Use AI to Slash Video Conference Bandwidth



Researchers at Nvidia have demonstrated the ability to reduce video conference bandwidth by orders of magnitude. In one example, the required data rate fell from 97.28 kB/frame to 0.1165 kB/frame, a reduction to roughly 0.1 percent of the original bandwidth.
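

The arithmetic behind that claim, using the per-frame figures as reported:

h264_kb_per_frame = 97.28     # reported H.264 payload per frame
maxine_kb_per_frame = 0.1165  # reported AI keypoint payload per frame

ratio = maxine_kb_per_frame / h264_kb_per_frame
print(f"Remaining bandwidth: {ratio:.2%}")            # about 0.12 percent
print(f"Compression factor: about {1 / ratio:.0f}x")  # roughly 835x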

FCC Will Clarify Section 230

Section 230 of the Communications Decency Act of 1996 was intended to promote free expression of ideas by limiting platform exposure to a range of laws that apply to other publishers.


In principle, the Act provided a safe haven for websites and platforms that wanted to provide a platform for controversial or political speech and a legal environment favorable to free expression. It has not apparently worked out that way, as there is growing concern that platforms are acting to suppress free speech. 


Section 230 says that "no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”  


In other words, platforms that host or republish speech are protected against a range of laws that might otherwise be used to hold them legally responsible for what others say and do. Ironically, a law intended to promote freedom of speech now is viewed by many as enabling the suppression of free speech. 


“Members of all three branches of the federal government have expressed serious concerns about the prevailing interpretation of the immunity set forth in Section 230 of the Communications Act,” says Federal Communications Commission Chairman Ajit Pai. “There is bipartisan support in Congress to reform the law.”


The Federal Communications Commission’s general counsel says the FCC has the legal authority to interpret Section 230 of the Communications Act of 1996. “Consistent with this advice, I intend to move forward with a rule making to clarify its meaning,” says Federal Communications Commission Chairman Ajit Pai. 


“Social media companies have a First Amendment right to free speech,” Pai says. “But they do not have a First Amendment right to a special immunity denied to other media outlets, such as newspapers and broadcasters.” 


The U.S. Department of Commerce has petitioned the Commission to “clarify ambiguities in section 230.” Earlier this week, U.S. Supreme Court Justice Clarence Thomas pointed out that courts have relied upon “policy and purpose arguments to grant sweeping protections to Internet platforms” that appear to go far beyond the actual text of the provision.


Many believe that clarification process is likely to remove “overly broad” interpretation that in some cases shields social media companies from consumer protection laws.


It is perhaps an unfortunate development, to the extent that the preferred antidote to limited free speech is “more speech by more speakers,” just as the corrective to market monopoly is “more competition.”


Wednesday, October 14, 2020

U.S. Supreme Court Justice Clarence Thomas Thinks Courts Have Interpreted Section 230 Too Broadly

U.S. Supreme Court Justice Clarence Thomas writes, in a court order denying a writ of certiorari, that some legal immunities granted to internet platforms under section 230 of the Communications Decency Act have been interpreted too broadly. That is a signal that at least one justice of the Supreme Court would narrow the scope of section 230 in ways that create more legal liability for major internet app platforms. 


Section 230 grants internet platforms immunity from prosecution if a third party uploads defamatory or otherwise illegal content. “Nowhere does this provision protect a company that is itself the information content provider,” Justice Thomas writes. 


In other words, when acting as a publisher, or making editorial judgments, a platform is not absolutely shielded from legal action otherwise available when libelous content is published by any media outlet.


In some sense, the discussions about section 230 involve the issue of freedom of speech, not just platform protection from third party speech and content or antitrust issues. 


Some might argue there is a possible way that First Amendment protection could be gained by third party users of a platform. The “public function exception” turns a private concern into a government operation when performing an “essential government function.” 


That seems a stretch. 


Providing an online platform or a social media site or search engine results does not clearly meet that test.


But big digital platforms, especially Facebook, Google, YouTube and Twitter, are facing growing scrutiny about monopoly power and censorship. Consider the matter of political censorship, about which complaints are growing louder. 


Traditionally, the right of free speech, as enshrined in the First Amendment to the U.S. constitution, protects speakers from government censorship, but only government action. There is a long legal history that extended First Amendment protections to new electronic media.


The internet, though, and particularly the rise of social media platforms, seems to raise entirely new questions, such as whether free speech rights can, or ought to, be extended to protect citizens from censorship by private corporations. That is almost entirely new ground, and up to this point, the right of free speech does not exist on any social platform in the United States. 


But some believe the traditional right of free speech, protecting citizens from government censorship, should be expanded in an era where “certain powerful private entities—particularly social networking sites such as Facebook, Twitter, and others—can limit, control, and censor speech as much or more than governmental entities,” argues David L. Hudson Jr., Justice Robert H. Jackson Legal Fellow at the Foundation for Individual Rights in Education.


The issue is whether it is possible to enlarge the space within which constitutional free speech protections apply, yet also avoid damage to the private property rights of platforms. It is not clear that regulation can do so, whether the issue is a remedy for business monopoly or the promotion of free speech. 


You might think the simplest answer is to simply allow people to speak their minds, with the exceptions of harassment and intimidation, threats of violence or promotion of criminal acts. But therein lies the problem, given the aggressively uncivil behavior one now sees on social media.  


What one speaker sees as the free expression of ideas will be seen as aggression and threat from another. Some 30 years ago this was not really a problem. People were simply more polite. But it is hard to mandate polite behavior. 


Many solutions seem to require “more regulation of platforms,” which tends to mean “less freedom” for platforms, if arguably in pursuit of “more freedom” for speakers. And that raises an old issue: who has the right of free speech and its benefits, the speaker or the reader or listener? 


The U.S. Bill of Rights, the first 10 amendments to the U.S. Constitution, provided that “Congress shall make no law” prohibiting the free exercise of speech or the press. Note the language, which protects people as speakers and the “press” as a speaker from government restriction. 


Later broadcast media regulations sometimes shifted the focus a bit to the rights of listeners or viewers, rather than speakers. Generally speaking, however, the protected right is held by “speakers,” not “audiences.” 


Perhaps the seminal case was Red Lion Broadcasting Co. v. FCC, 395 U.S. 367, 393 (1969), which allowed some content regulation of broadcasting for reasons of promoting the public interest. The point is that speaker rights were somewhat subordinated to the rights of viewers and listeners (the public interest). 


Complicating matters further is the issue of “who” the speaker is, in the context of a social media site or business: the platform or the users of the platform. Up to this point, it is the rights of the platform as “the speaker” which have been upheld, even if a platform supposedly is a neutral matchmaker between users who might, arguably, be considered the actual “speakers.”


The approach prioritizing the rights of audiences (listeners, readers, hearers) is exemplified by Alexander Meiklejohn’s book Free Speech and Its Relation to Self-Government, in which he says “what is essential is not that everyone shall speak, but that everything worth saying shall be said.”


All that assumes a singular public interest could even be identified. 


“Speakers in the United States have few or no legal rights when platforms take down their posts,” according to Daphne Keller, director of the Program on Platform Regulation at Stanford's Cyber Policy Center. 


Some use the analogy of must-carry rules imposed on cable TV operators, requiring carriage of broadcast stations. To date, lawsuits likening platforms to “public forums” have failed. 


Also, there are different issues related to content: removal of items that violate terms of service, and the way that ranking systems operate. The former deals with removed content; the latter deals with search ranking algorithms. 


The former issue is similar to the ways stories are constructed by news media, for example. Are opposing views treated fairly and with neutral adjectives? Is the amount of space given to opposing views roughly equal? 


The latter is similar to the choice of stories to run, and not the way content is treated once a “publish” decision is made. Which stories are deemed newsworthy, and which are not?


So far, U.S. courts have held that private platforms do not have a legal obligation to carry user speech. Still, some argue that dominant platforms are de facto gatekeepers, and should be regulated as “essential providers” of political speech, or even utilities, with a common carriage obligation. 


But those claims of speaker rights also bump up against the First Amendment rights of the platforms as speakers. Ranking and removal of content is an exercise of editorial judgment, in other words. 


Largely unexamined--so far--are various methods of giving more control to platform users, says Keller. It is not easy, but some advocate more end user content control settings. The problem is that people disagree about what constitutes “hateful speech.”


Some may want platforms to carry all legal speech. Others might simply prefer more curation, allowing civil dialogue. 


“One possible approach would let platforms act against highly offensive or dangerous content but require them to tolerate more civil or broadly socially acceptable speech,” argues Keller. 


Again, the problem is disagreement about how to identify such offensive or dangerous content, and not simply because the censoring algorithm or reviewer simply disagrees with the expression of those views. The same sort of problems arise with efforts to apply “fairness doctrines” that essentially preserve the rights of the listener, rather than the speaker. And all such rules limit free speech rights of speakers and platforms. 


Another approach distinguishes between “hosted” content (allowing anyone to speak) and “recommended” content that appears in news feeds, for example. The former is more akin to a town square, the latter more akin to the “curated” news feeds or search results. 


Yet others might prefer some form of unbundling the ranking and sorting algorithms, allowing third parties to create their own curated feeds. None of these would be simple. None would be free of some limitations on free speech. And most could negatively affect the monetization models that make the platform services possible. 


And yet we might be moving in such directions in any case. The recent political protests by professional athletes raise the question of whether constitutional free speech rights actually have standing even in the case of private firms. 


Traditional legal doctrine has been that private actors are not constrained by the Constitution generally, under the “state action” doctrine, which holds that “the First Amendment governs only governmental limitations on speech” (Nyabwa v. Facebook, 2018 U.S. Dist. LEXIS 13981, Civil Action No. 2:17-CV-24, *2 (S.D. Tex. Jan. 26, 2018)).


The state action doctrine holds that only the government or those acting on its behalf are subject to constitutional scrutiny. Non-governmental conduct therefore lies beyond the Constitutional protections.


On the other hand, the exercise of free speech has recently seemed to be invoked as a right by major league sports figures whose kneeling during the playing of the national anthem is said to be an exercise of free speech rights not traditionally protected by the First Amendment. 


“The time has come to recognize that the reach of the First Amendment be expanded,” says Hudson.  


The U.S. Supreme Court recognized this reality in Packingham v. North Carolina, 137 S.Ct. 1730, 1735 (2017): 


“While in the past there may have been difficulty in identifying the most important places (in a spatial sense) for the exchange of views, today the answer is clear. It is cyberspace—the ‘vast democratic forums of the Internet’ in general, and social media in particular,” the Court said.


The argument is that social media networking sites have become the modern-day equivalent of traditional public forums like public parks and public streets. 


“Public communications by users of social network websites deserve First Amendment protection because they simultaneously invoke three of the interests protected by the First Amendment: freedom of speech, freedom of the press, and freedom of association,” said Benjamin F. Jackson in a 2014 law review article (Benjamin F. Jackson, Censorship and Freedom of Expression in the Age of Facebook, 44 N.M. L. Rev. 121, 134 (2014)). 


“Federal courts can and should extend First Amendment protections to communications on social network websites due to the importance these websites have assumed as forums for speech and public discourse,” he argued. 


It is much like the debates over network neutrality, where there arguably is a difference between permissible network management and other practices some argue violate the basic principle of free access to lawful internet apps and services. 


For example, social network websites may censor communications in order to prevent convicted criminals from preying on victims, accusers, or witnesses or prevent certain users from harassing or intimidating other users without violating free speech principles. 


Censorship of pornographic or violent materials likewise could help create and maintain an environment acceptable to users of many ages and sensibilities. 


Also, censorship might be necessary to prevent harm to the website due to hacking and phishing attacks and comply with copyright and trademark laws.


The Supreme Court’s reasoning in Reno v. ACLU, 521 U.S. 844 (1997), supports treating communications on social network websites as constitutionally protected speech. 


To be sure, application of First Amendment principles to private actors would raise the issue of impairment of their property rights. To use the telecommunications analogy, that would be similar to common carrier regulation of prices and terms of service. 


There is legal precedent. Under the public function exception, “the exercise by a private entity of powers traditionally exclusively reserved to the State” constitutes state action (Jackson v. Metro. Edison Co., 419 U.S. 345, 352 (1974)). That has not generally been a winning argument in the courts.


But it might be argued that social networks resemble the public spaces the Supreme Court has chosen to protect in both its public function exception (Marsh v. Alabama, 326 U.S. 501 (1946)) and public forum doctrines.


The Supreme Court has held that the private property rights of a company did not “justify the State’s permitting a corporation to govern a community of citizens so as to restrict their fundamental liberties.”


The public forum doctrine was pioneered by Hague v. Committee for Industrial Organization, 307 U.S. 496 (1939), and Schneider v. Irvington, 308 U.S. 147 (1939). Under the public forum doctrine, restrictions on speech in public spaces that have traditionally served as venues for free expression and debate are subject to special constitutional scrutiny.


There also is an entwinement exception, though that also would face high scrutiny. Under the entwinement concept, a non-governmental actor might be deemed a state actor if the firm has acted together with, or has obtained significant aid from, state officials, beyond mere licensing, regulation or financial aid. 


Courts have thus far rejected claims that social network websites or their parent companies show “entwinement” (Gilmore v. City of Montgomery, 417 U.S. 556, 569 (1974)). 


Some believe the “Essential Facilities Doctrine” might apply. That doctrine states that if a monopoly power is found to own a facility that is essential to other competitors succeeding in the marketplace, the monopoly must provide reasonable use of that facility. The concept has been used with respect to railroads, bridges, even operating systems and communications facilities. 


It is not clear whether it can be applied to platforms, as it has been a tool for antitrust analysis and intellectual property rights, not First Amendment freedom. 


On the other hand, the essential facilities doctrine has been applied in the case of operating system platforms, which some might liken to the role played by social media or search engines. Still, the U.S. Supreme Court has not definitively upheld the doctrine as constitutional.  


While there are five potential elements of the essential facilities test, control of the essential facility by the monopolist, or competitors’ inability to practically or reasonably duplicate it, is about all a court needs to require that a monopoly provide access to competitors. 


That might be easy to see in the case of railroads, public communications networks or bridges. It will be much harder to convince courts that the doctrine protects user free speech rights on platforms. 


Will AI Fuel a Huge "Services into Products" Shift?

As content streaming has disrupted music, is disrupting video and television, so might AI potentially disrupt industry leaders ranging from ...