Saturday, January 22, 2022

Computing Improves Linearly; Social, Economic, Political Behavior Not So Much

Occasionally it is helpful to step back from the day-to-day and review your business, firm, industry or situation with a longer time frame. Sometimes we can only assess where we have been by doing so.


The exercise arguably is more difficult when trying to extrapolate where we are going. “Predictions are hard, especially about the future,” many, including physicist Niels Bohr, have quipped. 


“Extrapolations from techno-scientific innovations have a distressing capacity to be deterministic,” says historian Amanda Rees. One example might be the impact of computing and communications evolution on social, economic, political or scientific endeavors. 


It is easier to describe and predict some changes in computing capability than to predict how they might affect changes in the biological or social world. 


Still, to the extent that a specific problem can be solved when sufficient computing power is available at low cost, there are at least some indicators of potential. 


Many of us might note that we are able to use millimeter wave radio frequency spectrum for consumer and business communications only because reductions in computing cost and form factor, along with increases in capability, allow so much sophisticated signal processing that the spectrum can be made to work for consumer communications. 


I have argued in the past that an understanding of Moore's Law “saved” the U.S. cable TV industry in the 1980s when high-definition television was developed. 


Perhaps we might also say that those same developments in performance made possible streaming video services that now are cannibalizing cable TV. 


The point is that it is difficult to extrapolate future developments in a linear way from linear improvements in computing capability. But it sometimes helps to think about the application of computing in situations where business models formerly unthinkable can become quite practical. 


Anything we see in consumer internet applications--where capabilities are supplied at no cost to users--provides an excellent illustration. The classic question is what your business looks like if a key cost constraint is removed. 


Though we might have mischaracterized key elements of the argument, ride sharing did raise questions about what it would mean if “cars were free.” They obviously are not free, but musings about changes in personal transportation have happened because of the existence of ride sharing.  


The difficulty always is that other drivers of behavior also exist. Consider consumer demand for mass transit, which seems to be falling as other options--and social changes--develop. Many riders had less need--or no need--during Covid-19 pandemic restrictions on “going to work or school.” But lower mass transit ridership trends were in place even before Covid, both internationally and in the United States.  


But many speculate that the availability of ride sharing has diminished use of public transportation, though other social forces also seem to be operating. 


Likewise, we might argue that vastly-improved computing and storage price-performance curves are good enough to allow applied artificial intelligence in a growing range of use cases. Most of those use cases involve inferences about future impact based on historical metrics. 


Letting farmers know when to water or apply fertilizer, and in what quantities, should lead to improved crop production. Industrial processes likewise should be improved when we can predict when a particular machine will fail, or what must be adjusted in real time to optimize output. 
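
As a toy illustration of that kind of inference, here is a minimal sketch that fits a linear trend to historical machine readings and projects when a failure threshold would be crossed. All sensor values and thresholds are invented for illustration:

```python
# Minimal sketch: estimate time-to-failure by fitting a linear trend
# to historical sensor readings. All numbers are hypothetical.

def fit_line(xs, ys):
    """Ordinary least-squares fit; returns (slope, intercept)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    return slope, mean_y - slope * mean_x

# Hypothetical daily vibration readings from one machine (arbitrary units).
days = [0, 1, 2, 3, 4, 5, 6]
vibration = [1.0, 1.1, 1.3, 1.4, 1.6, 1.7, 1.9]
FAILURE_THRESHOLD = 3.0  # assumed level at which the machine fails

slope, intercept = fit_line(days, vibration)
days_to_failure = (FAILURE_THRESHOLD - intercept) / slope
print(f"Trend: +{slope:.2f} units/day; projected failure near day {days_to_failure:.1f}")
```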


Lots of other supply chain or operational processes likewise should benefit from cheap and ubiquitous ways to manage and optimize present flows of resources, whether that be people walking, cars on highways or other logistics-related flows. 


Computing progress means new applications or use cases can develop in a non-linear way, even when computing rates of development are linear. 


Technologist Ray Kurzweil noted in 2005 that “in 1968, you could buy one (Intel) transistor for a dollar. You could buy 10 million in 2002.”


Looking at the cost of a single compute cycle, Kurzweil also noted in 2005 that “the cost of a cycle of one transistor has been coming down with a halving rate of 1.1 years.”


“You get a doubling of price-performance of computing every one year,” he said. 
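
As a quick check of those figures: the 1968-to-2002 transistor counts imply a doubling time a bit under 1.5 years, while the per-cycle cost--which also captures speed gains--halves even faster. A back-of-envelope sketch:

```python
import math

# Back-of-envelope check of the Kurzweil figures quoted above.
# 1968: ~1 transistor per dollar; 2002: ~10 million per dollar.
years = 2002 - 1968                  # 34 years
improvement = 10_000_000             # transistors per dollar, 2002 vs. 1968
doublings = math.log2(improvement)   # ~23.3 doublings
print(f"Implied doubling time: {years / doublings:.2f} years")

# Kurzweil's separate claim: the cost of a transistor *cycle* halves
# every 1.1 years (faster, because clock speeds improved too).
cost = 1.0 * 0.5 ** (years / 1.1)    # $1 projected forward 34 years
print(f"Cost after {years} years at a 1.1-year halving rate: ${cost:.2e}")
```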


One likely impact in the global communications industry is the effect of AI-assisted networks on worker skill requirements, to say nothing of the improvements in network performance or availability. 


Some argue that skills will need to be upgraded as networks get smarter. The countervailing argument is that skill requirements might change, but not as much as people think. When the networks are smarter, they will be able to predict potential outages or degradations, allowing automatic or manual changes to prevent problems. 


Outside plant or core networking work might not become more complicated at all; the work might become less complicated, in terms of adjustments and maintenance. That might shift possible priorities in other ways that involve different tasks and skills, though not necessarily “higher” skills (depending on how one defines “higher”). 


Perhaps more effort shifts to marketing and away from plant maintenance. That might involve different skills, but not necessarily “higher” skills for most people. When social media algorithms dictate marketing actions, the heavy lifting is done by the algorithms. 


People at firms simply need to know what outcomes they wish to achieve.


Can Blockchain Remedy Some of the Internet's "Anonymity" Problems?

Interest in data transport protocols other than TCP/IP has been growing for the past decade. Those of you with long memories might recall that the global telecom industry debated the protocols for its next-generation network in the 1990s, with many favoring asynchronous transfer mode. 


To make a long story short, the industry chose TCP/IP, itself once believed to be a “transition” protocol. There were lots of reasons, but chief among them was the cost of connecting. ATM was relatively expensive; TCP/IP was radically cheaper. And not even volume deployment was going to eradicate the price differential. 


But there is another sense--beyond transmission costs--that is likely to become even more important in the decade ahead: the business value. We often forget that TCP/IP is based on “layers” that separate functions from each other.


That has led to the “over the top” way applications are created and owned on all networks using TCP/IP, and at this point that is virtually all networks. In part, that is because all networks now are computer networks, and TCP/IP was originally conceived as a data networking protocol. 


As we create applications on computing networks--applications separated from connectivity--so we also create applications running on public and private wide area networks. 
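
A toy sketch of that layering idea, with simplified, invented header fields: each layer wraps the payload from the layer above without inspecting it, which is what lets applications ride “over the top” of any IP network:

```python
# Toy sketch of protocol layering: each layer wraps the payload from
# the layer above as opaque bytes. Header fields are simplified.

def app_layer(message: str) -> bytes:
    return message.encode("utf-8")              # application payload

def transport_layer(payload: bytes, port: int) -> bytes:
    header = f"TCP dst_port={port}|".encode()   # toy transport header
    return header + payload

def network_layer(segment: bytes, dst_ip: str) -> bytes:
    header = f"IP dst={dst_ip}|".encode()       # toy network header
    return header + segment

# The network layer never looks inside the segment it carries; the
# application on top can be anything at all.
packet = network_layer(transport_layer(app_layer("hello"), 443), "192.0.2.1")
print(packet)
```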


“Anonymity” has been one feature of the internet that can be troublesome, for social reasons such as enabling bullying, financial fraud, phishing, spoofing and catfishing, to note a few problems. 


But “trust” has become a significant business issue in the internet era. Is it possible that business models that inherently have better “trust” attributes could supplant much of the “zero trust” nature of the internet?


Some might argue blockchain is a candidate to change the “trust” dimension of the internet, for consumers and business users. It once was argued that anonymity was important for political dissidents living under repressive regimes, and there is logic there. 


But that same anonymity arguably makes the consumer internet less useful, and positively harmful, for a similar reason: anonymity frees human beings from the in-person courtesy and respect they might otherwise show people. Anonymity encourages extreme expressions. 


Known identities have become more important, for all sorts of good reasons. Maybe a shift to blockchain--with a new emphasis on verifiable identities--will be a good thing.
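
To make the “trust” idea concrete, here is a minimal sketch of the tamper-evidence property that blockchain-style identity records rely on. It omits consensus, signatures and networking entirely; it is only the hash chain:

```python
import hashlib
import json

# Each block commits to the hash of the previous block, so altering any
# earlier identity record invalidates everything recorded after it.

def block_hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append(chain: list, record: dict) -> None:
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev": prev, "record": record})

def verify(chain: list) -> bool:
    return all(chain[i]["prev"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain: list = []
append(chain, {"identity": "alice", "attested_by": "registrar"})
append(chain, {"identity": "bob", "attested_by": "registrar"})
print(verify(chain))                        # True
chain[0]["record"]["identity"] = "mallory"  # tamper with history
print(verify(chain))                        # False
```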


$50 a Month for Speeds Between 100 Mbps and 200 Mbps is the "Sweet Spot" for U.S. Home Broadband

The “sweet spot” for U.S. home broadband is a monthly recurring cost around $50 and speeds between 100 Mbps and 200 Mbps, which is purchased by about half of all U.S. home broadband customers. 


Pricing by independent internet service provider Vyve Broadband shows the packaging reflecting buying patterns. The 200-Mbps package sells for $50 a month. The gigabit package, likely bought by about 11 percent to 12 percent of homes, sells for about $70 a month. 


The lowest tier offers 100 Mbps for $40 a month. 



source: Vyve Broadband 


It might seem curious, but newer payback models for home broadband using fiber to the home also work at revenue of about $50 per customer location a month, according to AT&T.


For those of you who follow the payback models for FTTH, that is somewhat shocking, as models from 20 years ago would have assumed per-customer revenue closer to $100 to $130 per month, to make the model work. 
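
A minimal payback sketch shows why the assumed revenue matters so much. The capex and margin figures below are illustrative assumptions, not AT&T's actual model:

```python
# Rough FTTH payback sketch. All figures are illustrative assumptions:
# actual per-home costs and margins vary widely by market.

COST_PER_CONNECTED_HOME = 1500   # assumed capex per connected home, dollars
MARGIN = 0.5                     # assumed share of revenue left after opex

def payback_years(monthly_revenue: float) -> float:
    return COST_PER_CONNECTED_HOME / (monthly_revenue * MARGIN * 12)

for arpu in (130, 100, 50):
    print(f"ARPU ${arpu}/month -> payback ~{payback_years(arpu):.1f} years")
```

At a fixed assumed cost, cutting ARPU from $100 to $50 doubles the payback period, which is why the cost side of the model had to improve for a $50 revenue target to work.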


That the revenue assumptions have changed so much reflects secular changes (declining demand for fixed network voice and linear video entertainment) as well as changes in the operating cost and capital requirements for home broadband. 


It also is noteworthy that T-Mobile’s 5G home broadband service is priced at $50 a month. Though T-Mobile no longer seems to emphasize “speed,” it had in April 2021 talked about speeds up to 100 Mbps. 


Verizon’s 5G fixed wireless has recently been repriced to $50 per month, with speeds up to perhaps 300 Mbps. 


All that tells you where the mass market demand is believed to exist. The packaging will change, of course, in terms of typical speeds and prices. "More speed for the same price" as well as "significantly faster speeds for a higher price" are the two trends that will likely remain in place.


Friday, January 21, 2022

IP Was a Business Model Change, Not Just a New Networking Platform

Almost always, big changes in networking architecture and platform change the range of possible business models and market shares in the application, connectivity and infrastructure businesses. 


For example, the move to disaggregated, open and virtual networks automatically creates new potential roles for system integrators. Where platforms could be purchased monolithically, new networks can be assembled from various suppliers. 


To note only the most-obvious possible changes, monolithic platform suppliers could lose some market share to new suppliers and network integrators who supply the network components and the complete networks. 


In other words, when we move to disaggregate functions and elements, we automatically create a new need for system integration. 


So we must now look for the emergence of new names in the system integration business, as it applies to core networks and access networks. At some point, if they are willing or forced to concede some roles and revenues, the legacy monolithic network suppliers also are in line to act as system integrators, using elements and software sourced from any number of possible suppliers. 


Consider an earlier change that produced precisely those results. Because the technology architecture is built in “layers,” disaggregation is both possible and desirable. 


Look at the data center or server businesses. Hyperscalers now build their own servers; they do not buy them. They can do so because layers make it possible: custom software can run on commodity hardware, and commodity hardware can be built “in house.”


Hyperscalers build and own their own wide area networks; they do not have to buy services from WAN suppliers on their core data center to data center routes. 


Hyperscalers build their computing fabrics from modular arrays of servers, not monolithic mainframes. Whenever possible, they virtualize both compute and storage operations, rather than dedicating hardware to those functions. 


In a related process, “everything” is moving to virtualized supply. Enterprises and consumers can buy “services” rather than owning their own hardware and software licenses. Customers can buy computing or storage features “by the instance,” as a service, rather than building and operating their own data centers. 


It is an under-appreciated fact that when the global telecom industry selected internet protocol as its next generation platform, it also--knowingly or not--chose a layered business model. 


IP is not simply a framework for moving bits around; it is a business and revenue architecture as well, separating logical functions in ways that allow whole industry segments to emerge in a disaggregated way. 


In other words, the salience of the term “over the top” is precisely the result of a “layered” approach to building communication networks. When we disaggregate edge devices and functions from transport layer functions, and those from application functions, the revenue streams and possible business models also are disaggregated. 


That is why Meta, Amazon, Netflix and others can build businesses using networks without owning networks. 


IP was not just a technology platform change. It was a profound business model change.


"Open:" How We Got Here

Among the various conversations people had at the #PTC’22 conference are those about where networks are going, where the business is going and where revenue is to be found. Among the topics, 5G and Wi-Fi 6, network slicing, edge networks and the complications of in-home environments have been prominent. 


Among the potentially most far-ranging were questions posed by Robert Pepper, Meta head of global connectivity policy. Use of open technology is simply the latest in a series of transitions that have happened in the networking business over the last 40 years, Pepper said. 


“Disaggregated network elements are 40 years in the making,” Pepper said. The industry transitioned from analog to digital; then hardware to software functions, he noted.


The “next transition is from integrated and proprietary to open and modular networks,” he said. 


There will be big repercussions for suppliers of networking infrastructure. Where telcos 50 years ago developed and made their own gear, they then switched to buying complete networks from a handful of global suppliers. That obviously created huge new businesses, but also made telcos “captive” to a few suppliers and “vendor lock in.”


Suppliers might like that state of affairs, but buyers (telcos) hate it, it is fair to say. In a broad sense, the shift to open and modular networks also represents a shift from vendor-led to operator-led infrastructure development and supply. 


It also is fair to note that there always are private interests that benefit from any wider shift in framework. Perceived benefit hinges on where a firm or industry segment operates in the complete value chain. 


Application supplier business models depend on ubiquitous, high-quality and low-cost internet access. Access providers are not similarly situated within the value chain. For app providers, high-quality, low-cost internet access is a prerequisite for business. For connectivity providers, access is the business. 


For an app provider, internet access is a cost of doing business. For a connectivity provider, access is the core revenue stream. The former wants lowest-possible cost and highest-possible quality; the latter wants highest-possible revenue with minimum-possible cost. 


You might argue it is in Meta’s interest for internet access to be universal and good, as it is in a connectivity provider’s interest to reap the highest revenue from access services, with the highest margins consistent with long-term sustainability. 


If Meta is right, economics are moving in the direction of what is favorable for application creators. 


There are clear analogies in the data center or server businesses as well. Hyperscalers build their own servers; they do not buy them. Hyperscalers build and own their own wide area networks; they do not buy services from WAN suppliers on their core data center to data center routes. 


Hyperscalers build their computing fabrics from modular arrays of servers, not monolithic mainframes. Whenever possible, they virtualize both compute and storage operations, rather than dedicating hardware to those functions. 


Moderator Gary Kim, a PTC volunteer and consultant, noted that when the global telecom industry selected internet protocol as its next generation platform, it also--knowingly or not--chose a layered business model. 


IP is not simply a framework for moving bits around; it is a business and revenue architecture as well, separating logical functions in ways that allow whole industry segments to emerge in a disaggregated way.


In other words, the salience of the term “over the top” is precisely the result of a “layered” approach to building communication networks. When we disaggregate edge devices and functions from transport layer functions, and those from application functions, the revenue streams and possible business models also are disaggregated. 


That is why Meta, Amazon, Netflix and others can build businesses using networks without owning networks. 


IP was not just a technology platform change. It was a profound business model change.




What if "Better Broadband" Actually Does Not "Cause" Economic Growth?

The prevailing wisdom that super-high-quality home broadband actually changes things is widely and uncritically accepted as “truth.” That is not to deny that ubiquitous access to higher-quality broadband is to be preferred. Households that can only get 25 Mbps will not have the same experience as households able to use 100 Mbps to 200 Mbps, when there are multiple users and multiple devices in simultaneous use. 


The issue there is bandwidth per user and device, in real time, with simultaneous use of various applications. Multiple users almost always benefit from “more bandwidth,” as is the case for every shared communications medium. 


But as a matter of science, it is impossible to actually quantify the outcomes from upgrading access--ubiquitously--from some lower level to some higher level. 


For most households, businesses and communities, almost nothing would change simply because bandwidth was increased from a moderate level (100 Mbps to 200 Mbps) to a gigabit per second, for example. 


More precisely, for any single user, trying to use any mix of applications, more bandwidth might help with experience, or might not. In other words, the benefit of “more bandwidth” depends on how many users in a home, how many online simultaneously, what they are trying to do, how many devices they are using and what the applications “need” in terms of performance. 
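
A toy calculation makes the point; the per-application bit rates below are rough, assumed figures:

```python
# Whether "more bandwidth" helps depends on concurrent demand.
# Per-application bit rates are rough, assumed figures (Mbps).

DEMANDS_MBPS = {"4K stream": 25, "HD video call": 4, "web/email": 2}

def household_demand(usage: dict) -> int:
    """Total concurrent demand for a mix like {'4K stream': 2, ...}."""
    return sum(DEMANDS_MBPS[app] * n for app, n in usage.items())

evening_peak = {"4K stream": 2, "HD video call": 2, "web/email": 3}
need = household_demand(evening_peak)
for tier_mbps in (25, 100, 1000):
    status = "OK" if tier_mbps >= need else "congested"
    print(f"{tier_mbps:>4} Mbps tier vs. {need} Mbps peak demand: {status}")
```

For this hypothetical household, the jump from 25 Mbps to 100 Mbps changes the experience; the jump from 100 Mbps to a gigabit changes nothing.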


Downstream is one thing; upstream another thing. 


And, to be sure, our requirements drift upwards over time. That will not stop. But outcomes hinge on many things other than per-user bandwidth. We cannot actually say that student performance on homework is X percent better with Y increase in per-user bandwidth. Maybe it is; maybe it isn’t. 


And it is hard to see a true causal relationship between regional economic growth and job growth, for example, as bandwidth is increased from X to Y. Regions that are growing slowly will still grow slowly, even with better broadband, because growth hinges on other matters, such as proximity to large markets. 


Regions losing population, facing industrial shutdowns or confronting other changes in underlying conditions do not materially change simply because “better broadband” is available. 


Tourism, manufacturing or service businesses do not often relocate to distant or isolated regions because better broadband is available. There are other important reasons why a place is deemed fruitful for additional job growth or facilities. 


Better broadband does not causally change educational or industrial or professional skills possessed by the local population. You might argue that permanent work from home will change living locations. 


But most of those changes will still be tethered to population centers in key ways, simply because humans value the amenities that population density provides. So exurban changes will happen more than shifts of sizable numbers to very-remote areas. Shifts from urban to suburban likewise will be materially more important than shifts to very-rural areas. 


Yes, anecdotally, more people will spend more time in mobile modes when working away from the office. But the actual level of home broadband access quality--beyond a baseline level--will arguably be a secondary consideration, at best, in most cases. 


Places that can be upgraded from less than 25 Mbps up to 100 Mbps are likely to be places not so desirable for workers for other reasons. Upgrades from 100 Mbps to some higher number likewise might be helpful and preferred, but not a driver of detectable performance and outcomes. 


One might argue that personal productivity now is higher with gigabit access than with dial-up. But that is a correlation. The applications I could use in the dial-up era were the real limitation, not the bandwidth. 


Today’s applications are so much richer that I cannot separate bandwidth from application richness. To the extent I might claim to be more productive, it is mostly because my applications and devices allow me to do more, in less time, not that my bandwidth--per se--allows me to do so. 


Better bandwidth is--virtually all of us agree--a good thing. But its ability to change outcomes--economic growth, job creation, educational results--generally is overstated and hard to prove. Parental support for and involvement in their children’s education counts for much more. 


The general economic growth profile of a region matters more than the speed of broadband. The presence of large pools of workers with the right skills matters more than broadband. The quality of life of a region for such workers also arguably matters more than broadband.


In fact, we cannot disprove the thesis that highly-educated residents, fast-growing regions and industries, high incomes and high wealth “cause” better broadband, not the reverse. 


Wednesday, January 19, 2022

U.K. ISPs More than Double Gigabit Internet Access Availability in a Year

Big internet service providers are used to slings and arrows shot at them. Sometimes the criticisms of their performance seem undeserved. Consider U.K. gigabit-per-second internet access availability. 


U.K. ISPs already had said they would cover between 70 percent and 80 percent of the country’s households with gigabit-capable infrastructure by 2025, without government assistance. 


In September 2020 about 27 percent of homes could buy it. A mere year later, 46 percent of homes could buy gigabit services, according to Ofcom. 


Ofcom estimated availability had reached closer to 60 percent by the end of 2021, largely as a result of Virgin Media’s upgrade to DOCSIS 3.1, Total Telecom says. 


That is a dramatic change for a single year’s work. 


Yes, some households will be harder to upgrade, and yes, there will be additional government support to do so. 


To help deliver on these targets, in 2019, the government pledged £5 billion in public funding to help connect the most difficult-to-reach 20 percent of households, Total Telecom says. Most will be allocated no earlier than 2026, however. 


The point is that we would probably all find it hard to point to a single year when progress that dramatic was made. Basically, the major ISPs more than doubled the availability of gigabit per second service in a year’s time.


Tuesday, January 18, 2022

Not Every Acquisition Works Out

Not every acquisition works. Not every asset disposition is driven mostly by profit taking. Sometimes loss limitation is at work. And though many institutional investors or private equity firms have one business model for telecom infrastructure assets, service providers often have a different model. 


That difference in models explains why many institutional and private equity firms now are buyers of assets while many service providers are asset sellers. WindTre might be next. Lumen and Telefonica are among recent sellers. So was Cincinnati Bell.


Telecom Italia could move as well. 


“Because we can” or “because we should” might explain a good deal of asset disposition behavior in the connectivity business these days. 


Optus owner Singtel, for example, is said to be mulling the sale of a stake in its Australian access facilities, a move that would allow Singtel to raise cash. 


Such opportunistic moves--as always--are driven by a combination of seller need, buyer interest and a broader rise in the value of optical fiber access and transport assets for investors in search of alternative assets. 


Low interest rates mean lots of capital is available, while high valuations for other traditional assets also are driving investor interest in lower-valuation, higher-return financial vehicles and something more akin to a private equity approach to investing by institutional investors such as pension funds. 


Buyer interest has increased the value of optical fiber assets, or the ability to create them, while sellers are enticed by such higher valuations to monetize access network assets as they earlier monetized cell tower assets. Singtel itself sold a majority stake in its Australia cell towers in 2021. 


No doubt owner's economics still are important. But the issue is whether full ownership is required to reap that value. In a growing number of cases, partial ownership seems to be viewed favorably.  


In other areas, co-investment deals are changing the economics of optical fiber investment. 


For a number of reasons, the business model for telco and cable TV fiber to the home is changing. A higher degree of government subsidy support, investor appetite for FTTH facilities as an alternative asset and competitive dynamics in the home broadband industry all mean the business case for FTTH improves. 


As one example, Cable One is part of a joint venture with GTCR LLC, Stephens Capital Partners, The Pritzker Organization and certain members of the management team to build fiber-to-premises networks under Clearwave Fiber.


Clearwave Fiber holds the assets of Cable One’s subsidiary Clearwave Communications and certain fiber assets of Cable One’s subsidiary Hargray Communications. 


At the same time as capital investment requirements are changing, there is a shift in the assumptions about business model. 


In the late 1990s FTTH was seen as the only viable way for telcos to take market share in the linear video subscription business from cable TV operators. So the revenue upside was subscription video and internet access speeds. To be sure, video arguably was seen as the bigger revenue driver, as late 1990s telco FTTH speeds were in the 10 Mbps range. 


Bundling (triple-play or dual-play) also was seen at that time as the way to compensate for competition-induced account losses. While telcos and cable operators competing across the voice, business customer, internet access and video entertainment markets might each have fewer total accounts, revenue per account from triple-play services would compensate. 


But something else now seems to have changed. A decade ago, independent internet service providers began to attack the market increasingly based on one service: home broadband. To be sure, many independent ISPs tried a dual-play or triple-play approach for a time. 


But nearly all eventually settled on a home broadband-only approach. Since virtually all independent ISPs face both telco and cable TV competitors, the single-product business model makes some concessions on potential revenue that necessarily must be balanced by lower capital investment and operating costs. 


The latest developments are that such tradeoffs are seen as feasible even for incumbent telcos: in other words, the business model increasingly relies on broadband as the foundation, with some contributions from voice. Video (linear or streaming) plays a lesser or no role in revenue assumptions. 


There are other changes. Subsidies have been rising for broadband deployment, and that also changes the capex requirements. Some of the investment in optical fiber also is helped by the denser optical fiber networks necessary to support 5G networks. Essentially, the payback model is bolstered by the ability to defray some optical media costs from mobile service revenue opportunities. 


Also, 5G supports home broadband using the same transmission facilities as does mobile service, often offering a chance for mobile operators to compete in the home broadband business at relatively low incremental cost. That also helps lower the cost of fixed network FTTH as more revenue is wrung from the installed assets. To the extent that higher revenue produces incrementally higher free cash flow, more capital is available to invest in additional FTTH facilities.


The incremental cost of consumer home broadband is lower once a dense trunking network must be put into place to support small cell mobile networks. 


Also, the value of FTTH facilities has changed as rival investors (institutional investors, private equity) view consumer broadband as a legitimate alternative investment. That boosts the equity value of an FTTH network and supplies new sources of investment. 


Also, the cost of FTTH construction has fallen steadily over the past few decades. And the expected reduction in operating costs on fiber networks, as opposed to copper networks, now is well attested. So there are opex savings. 


FTTH remains a challenging investment, nonetheless. But it is noteworthy that assumptions about the business model now have changed for incumbent and new providers as well. Where it once was thought an FTTH upgrade virtually required revenue from three services, in an increasing number of cases the investment can be justified based on home broadband alone. 


In greater numbers of cases, the primary value of home broadband is supplemented by some revenues from other sources. But where a triple-play might have produced $130 per month to $200 per month revenues, home broadband might produce $50 to $80 a month. 


That projects increasingly are feasible with a $50 monthly revenue target and adoption around 40 percent to 50 percent shows how much the capex and opex assumptions have changed.
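
A companion sketch shows why the adoption (take rate) assumption matters as much as the revenue target; again, all capex and margin figures are illustrative assumptions:

```python
# Per-home-passed arithmetic. Capex figures are assumed for
# illustration; actual builds vary widely by density and labor cost.

CAPEX_PER_HOME_PASSED = 800   # assumed dollars
CAPEX_PER_CONNECTION = 600    # assumed drop plus install, dollars
MARGIN = 0.5                  # assumed share of revenue left after opex

def payback_years(arpu: float, take_rate: float) -> float:
    capex_per_customer = CAPEX_PER_HOME_PASSED / take_rate + CAPEX_PER_CONNECTION
    return capex_per_customer / (arpu * MARGIN * 12)

for take in (0.2, 0.4, 0.5):
    print(f"take rate {take:.0%}: payback ~{payback_years(50, take):.1f} years at $50 ARPU")
```

Because the cost of passing homes is spread over only the homes that buy, moving take rates from 20 percent toward 40 percent or 50 percent does as much for the model as any single cost reduction.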


ISP Bandwidth Planning Has Been Remarkably Effective and Efficient

Something we learned during the Covid pandemic was that the way internet service providers engineer their networks--adding capacity in advance of demand--does work to handle unexpected demand spikes. They have been effective at building networks that can withstand even an unexpected and sudden change in the demand curve.  


On the other hand, it also makes good business sense to invest in additional capacity only in line with anticipated demand increases, whatever growth rate you believe reflects actual demand. As it turns out, ISPs and their suppliers have been good at "efficiency" in supplying new capacity as well.


This forecast by Point Topic illustrates the concept. Given expected demand growth, capacity growth is planned at a rate that stays ahead of demand, but not too far ahead. 


In other words, investment is matched to revenue. The trick always is that customer segments exist: some customers have higher demand than others, and the geographic locations of those segments are mixed. Business locations are mixed in with consumer locations. Higher-demand home worker locations are mixed in with lower-demand “average consumer” locations. 

source: Point Topic 


In other words, the whole network embeds assumptions about the minimum performance that must exist to handle the peak load by the heaviest users. At the same time, it makes sense not to “over-engineer” the network, adding cost that has no corresponding revenue upside. 


So much hinges on how fast any firm believes typical demand will increase. Is it 50 percent per year, 40 percent per year or some lower figure? Those assumptions might also fail to account for improvements in networking infrastructure efficiency or the emergence of new bandwidth-intensive applications that change demand expectations.
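
For a sense of how much that growth assumption matters, a quick compounding sketch (the starting demand of 100 units is arbitrary):

```python
# Five years of compound demand growth at 40% versus 50% per year.
for growth in (0.40, 0.50):
    demand = 100.0  # arbitrary starting demand
    for _ in range(5):
        demand *= 1 + growth
    print(f"{growth:.0%}/year: demand after 5 years = {demand:.0f} (from 100)")
```

A 10-point difference in the assumed growth rate compounds to a roughly 40 percent difference in required capacity within five years.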

Did a Covid Emergency Program Work? We Don't Really Know

It often is difficult to determine whether any specific government or private program to “fix a problem” actually worked. An emergency program  for broadband service might provide a case in point. 


The Emergency Broadband Benefit (“EBB”) Program, established by the Consolidated Appropriations Act of 2021, had nearly nine million participants by the end of 2021. The stated purpose was to keep low-income households connected at a time when Covid restrictions made it hard for people to go to work. 


So the U.S. Congress created a program providing up to a $50 monthly subsidy (more in tribal areas) for Internet connections, in addition to existing programs. 


The issue is how to interpret program success. The stated objective was to “keep people connected.” 


The problem is that most of the people using the temporary program also were using the existing programs. So it is akin to trying to “prove a negative” (establishing something to be true, with certainty, in the absence of evidence).


Households on support programs did not disconnect. What we do not know is whether they would have disconnected in the absence of the emergency program. 


“My analysis suggests that in November 2021, Lifeline subscribers (households receiving discounted service) accounted for about 80 percent of EBB participation,” says George Ford, Phoenix Center chief economist. “With broadband adoption by low-income Americans being about 75 percent, it could be that only about five percent of EBB participants were not previously online.”
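
Ford's back-of-envelope estimate can be reproduced directly from the two figures he cites:

```python
# Reproducing the quoted estimate from Ford's stated figures.
lifeline_share = 0.80        # EBB participants already on Lifeline
adoption_low_income = 0.75   # low-income households already online

non_lifeline = 1 - lifeline_share
newly_online = non_lifeline * (1 - adoption_low_income)
print(f"EBB participants plausibly new to broadband: ~{newly_online:.0%}")
```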


What we might be able to say is that the “EBB Program did not appear to be increasing broadband adoption by much, though it may be argued that was not the point,” says Ford. “The point of the EBB Program was not necessarily to expand adoption but to maintain it during the pandemic’s economic malaise, so perhaps this finding is untroubling.”


Still, we do not know what might have happened if the EBB did not exist.


Monday, January 17, 2022

Telefonica Selling Copper Lines to Macquarie

Telefónica is selling part of its copper network to the Macquarie fund for 200 million euros. 

It might seem a curious transaction, as the copper access lines are described as “obsolete infrastructure.” It is not clear how many access lines are part of the deal. 


But Macquarie plans to upgrade those copper lines with optical fiber access, betting it can assure itself a long-term stable source of cash flow, functioning as an alternative asset in its portfolio. 


Separately, many other telcos with copper assets have concluded they need to upgrade copper access to fiber-to-home rapidly, as a matter of protecting asset value and revenues. In substantial part, all internet access providers are having to consider similar moves to keep pace with the competition, including BT and Virgin in the United Kingdom, for example. 


In May 2021, for example, 40 percent of U.K. homes (11.6 million) had access to gigabit-capable broadband, according to Ofcom. About 24 percent were covered by FTTH access facilities, according to S&P Global Market Intelligence. 


As BT steps up its FTTH deployments, competitors believe they must get there first, or get there to stay competitive. 


Monk Seal Visits #PTC22

I realize this looks like a piece of driftwood. It actually is a Hawaiian monk seal that hauled up on the beach in front of the Hilton Hawaiian Village, where #PTC22 is being held, early Sunday evening.  

In 20-some years of attending PTC, this is only the second time I've seen one on Waikiki. 


They don't do much but sleep when they come ashore. 

Some say this is just a log. The Hilton staff who surrounded it with traffic cones and barrier tape did not believe it was a log. 

The Hawaiian monk seal is one of the most endangered seal species in the world, according to the National Oceanic and Atmospheric Administration.


source: Marine Mammal Center

The population overall had been declining for six decades and current numbers, though increasing, are only about a third of historic population levels.

Hawaiian monk seals are found in the Hawaiian archipelago, which includes both the main and Northwestern Hawaiian Islands, and rarely at Johnston Atoll, which lies nearly 1,000 miles southwest of Hawai'i. These monk seals are endemic to these islands, occurring nowhere else in the world. Hawaiian monk seals are protected under the Endangered Species Act, the Marine Mammal Protection Act, and State of Hawai'i law.




One Downside of Hybrid Business Events

One of the issues with hybrid trade shows and business conferences is how to take a group photo when half the people are attending remotely. The Pacific Telecommunications Council held its first Advisory Council meeting of the year at PTC'22, and most attendees were attending virtually, including AC members, legal counsel and the chair of the Board of Governors. 

Yali Liu, Bert Crinks, Felix Seda, Patricia Paoletta, Darren Yong, Gary Kim, Nico Grove, Jim Poole (sitting in for Alex Vaxmonsky), Isabel Paradis and Nakul Rege are in front of the screen. 

Secretariat members Sharon Nakama, Liane Kobayashi, Jancie Spencer and Nicole Fuertes also attended, as did Tara Giunta, John Gasparini and Sean Bergin. Stephan Beckert, Thomas “Tom” Cooper, Mark Dando, Joe Zhu, Brandon Amber, Mohamed Elagazy, John Garret, Heng Lu, Robert Mitchell, Francis Pereira, Masaaki Sakamaki, Muhammad Rashid Shafi and Una Zheng were others on the call, as I recall. 

We got our work done, but it really is harder than when we are all face-to-face in the same room. 


Saturday, January 15, 2022

Home Broadband Costs in U.K. are Low, But Vary by Average Monthly Income

With the caveat that entry-level home broadband is different from the typical service most people buy, or the highest-performing tier available, entry-level broadband prices in the United Kingdom show the relative cost of such service as a percentage of monthly income. 

source: Point Topic 


Those prices range from a low of 0.35 percent of monthly income in London to about 1.4 percent of monthly income in the most-costly area. Much of the disparity is caused by differences in typical monthly income. 


With fixed prices, higher income leads to lower costs as a percentage of income.
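
The arithmetic is simple: with a roughly fixed price, the burden is just price divided by income. The incomes below are illustrative assumptions chosen to bracket the quoted 0.35 percent and 1.4 percent figures, with an entry-level price assumed near £25 a month:

```python
# Fixed price divided by varying income; all figures are assumptions.
price = 25.0  # assumed entry-level monthly price, pounds

for region, monthly_income in (("high-income area", 7100),
                               ("low-income area", 1800)):
    print(f"{region}: {price / monthly_income:.2%} of monthly income")
```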


Indirect Monetization of Language Models is Likely

Monetization of most language models might ultimately come down to the ability to earn revenues indirectly, as AI is used to add useful fe...