Wednesday, June 26, 2024

U.S. Consumers Often Choose "Good Enough" Home Broadband

Though regulators and advocates tend to focus on availability (coverage) and quality (speed), consumers often prioritize value, preferring a "good enough for my needs" approach in which the recurring price can matter more than raw performance. 


And that might explain the demand for fixed wireless access (FWA) services in many markets, where FWA offers enough speed at lower prices than telco digital subscriber line, telco fiber-to-home or cable TV home broadband services. 


In the U.S. market, FWA appears to have dented demand for cable services, in particular. 

source: Opensignal 


Though most consumers would likely have a hard time quantifying how much speed their households require, it remains true that beyond a fairly low level of access speed, users gain very little performance (quality of experience) when shifting to services faster than about 25 Mbps per user in the downstream.  

source: FCC 
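
As a rough rule of thumb implied by that figure, household requirements scale with the number of concurrent users rather than with headline speed tiers. A minimal sketch of that arithmetic, with household sizes assumed purely for illustration:

```python
# Illustrative only: simple per-household sizing based on the roughly
# 25 Mbps-per-user figure discussed above. Household sizes are assumptions.
PER_USER_MBPS = 25

for concurrent_users in (1, 2, 4):
    print(f"{concurrent_users} concurrent users -> ~{concurrent_users * PER_USER_MBPS} Mbps")
# Even a household with four concurrent heavy users lands around 100 Mbps,
# well below gigabit and multi-gigabit tiers.
```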


To be sure, the percentage of U.S. customers buying gigabit or multi-gigabit services has steadily increased since 2021, while the percentage of customers buying service at speeds below 50 Mbps has dropped.


But it might be worth noting that about 25 percent of the home broadband market continues to buy services at the lower end (200 Mbps or less) of home broadband speeds sold by ISPs. That suggests the market opportunity for FWA is about a quarter of the market, so long as speeds top out around 200 Mbps. 


At the moment, FWA could address more than half the U.S. market if it were upgraded to offer speeds up to 400 Mbps. 


source: OpenVault 
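
As a rough illustration of that addressable-market arithmetic, the sketch below sums the share of subscribers in each speed tier that an FWA service could match, given its top speed. The tier shares are illustrative assumptions patterned on the figures in the text (about a quarter of the market at 200 Mbps or less, more than half at up to 400 Mbps), not actual OpenVault data.

```python
# Rough sketch: what share of the home broadband market could FWA address,
# given the top speed it can offer? Tier shares below are illustrative
# assumptions patterned on the text, not actual OpenVault data.

ASSUMED_TIERS = [
    (100, 0.12),    # tiers topping out at 100 Mbps
    (200, 0.13),    # 101-200 Mbps
    (400, 0.30),    # 201-400 Mbps
    (1000, 0.32),   # 401-1,000 Mbps
    (10000, 0.13),  # multi-gigabit
]

def addressable_share(fwa_ceiling_mbps, tiers):
    """Sum the subscriber shares of every tier FWA can match or beat."""
    return sum(share for tier_top_mbps, share in tiers
               if tier_top_mbps <= fwa_ceiling_mbps)

print(addressable_share(200, ASSUMED_TIERS))  # ~0.25, about a quarter of the market
print(addressable_share(400, ASSUMED_TIERS))  # ~0.55, more than half the market
```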


We might well assume that buyers of gigabit-speed services are mostly driven by service “quality” as measured by downstream speed, upstream speed, unlimited data usage and network reliability. That might broadly account for up to 30 percent of the market. 


This segment includes gamers, streamers, professionals with high bandwidth needs and larger families. 


In contrast, perhaps up to 30 percent of customers buy the most affordable option; these include students, budget-conscious families and those with limited online activity. By definition this segment is the most price conscious, even if that means accepting lower speeds. 


The broad middle of the market might represent up to 40 percent of customers who balance price and features. This segment prefers “good enough” speeds, sufficient data allowances, and reliable service, at a reasonable cost, somewhere between the most-expensive and most-affordable tiers of service. 


Service Tier                                 Percentage   Key Driver
Higher-Cost (High Speeds, Unlimited Data)    30 percent   Value-Driven
Median Cost (Balanced Speeds & Data)         40 percent   Balanced Value
Lower-Cost (Lower Speeds, Data Caps)         30 percent   Price-Driven


In many cases, FWA could appeal to both “balanced value” and “price-driven” segments of the market, in particular for single-person households or dual-person households with lower usage. 


How Much New Capacity Will 6G Require?

Discussing spectrum and capacity needs for 6G networks, the NextG Alliance suggests that the highest requirements will be for business-to-business or business-to-consumer applications such as extended reality, which might require 500 Mbps or more. 


Other relatively-high-bandwidth use cases include entertainment (100 Mbps to 500 Mbps) and robotics and autonomous systems. 

source: NextG Alliance 


That noted, most use cases will require far less bandwidth. 
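
Still, the high end of those requirements illustrates why such use cases drive spectrum needs. A back-of-the-envelope sketch follows; every parameter value (concurrent users per cell, spectral efficiency) is an assumption for illustration, not a NextG Alliance figure.

```python
# Back-of-envelope: per-cell spectrum implied by a high-bandwidth 6G use case.
# All parameter values are illustrative assumptions, not NextG Alliance figures.

PER_USER_RATE_MBPS = 500             # high-end extended-reality requirement cited above
CONCURRENT_USERS_PER_CELL = 10       # assumed
SPECTRAL_EFFICIENCY_BPS_PER_HZ = 10  # assumed average for an advanced radio

aggregate_demand_mbps = PER_USER_RATE_MBPS * CONCURRENT_USERS_PER_CELL
required_spectrum_mhz = aggregate_demand_mbps / SPECTRAL_EFFICIENCY_BPS_PER_HZ

print(f"Aggregate cell demand: {aggregate_demand_mbps:,} Mbps")
print(f"Implied spectrum need: {required_spectrum_mhz:,.0f} MHz per cell")
# About 5,000 Mbps of aggregate demand and roughly 500 MHz of spectrum per
# cell under these assumptions.
```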


Tuesday, June 25, 2024

Are "Scale" and "New Expressiion" the Key Elements for AI Copyright Issues?

New information and communications technologies have consistently raised novel challenges for copyright law and its application, so it is not surprising that artificial intelligence is raising issues as well. But most prior copyright issues have revolved around duplicating existing content.


The whole point of generative artificial intelligence is the creation of new content. And that would seem to place generative AI-created content outside copyright infringement.


In most cases, issues arise because digital technology sharply lowers the cost of copying and sharing information. Peer-to-peer file sharing, content streaming, news aggregation, search engine results and remixing of content provide examples. 


Earlier device technologies, ranging from copiers to videocassette recorders, also raised new copyright issues. But all those issues concerned the copying and distribution of existing content.


One might argue that, since generative artificial intelligence models are built and trained essentially by using web crawlers to index content found on the internet, the copyright issues center on "fair use" and infringement of the underlying rights.


That might not be so easy to apply to generative AI, whose purpose is the creation of new content, not the copying and distribution of existing copyrighted work.


The core of all copyright law is the principle that ideas themselves cannot be copyrighted, only their specific expressions. And generative AI challenges that framework in new ways.


Innovation: VCR
Case: Sony Corp. v. Universal City Studios, Inc. (1984)
Issue: Is using a VCR to record copyrighted TV shows copyright infringement?
Outcome: The court ruled in favor of Sony, holding that recording for later viewing was fair use.

Innovation: Digital Music
Case: A&M Records, Inc. v. Napster, Inc. (2001)
Issue: Is a peer-to-peer file-sharing service liable for copyright infringement by its users?
Outcome: The court ruled Napster liable for failing to prevent copyright infringement.

Innovation: Digital Music
Case: MGM Studios, Inc. v. Grokster, Ltd. (2005)
Issue: Can a company that provides file-sharing software be liable for copyright infringement by its users?
Outcome: The court ruled Grokster could be liable if it knew users were infringing and did not take steps to stop it.

Innovation: Electronic Publishing
Case: Authors Guild v. Google Inc. (2012)
Issue: Does Google's scanning of entire libraries, allowing users to search snippets of copyrighted books, constitute fair use?
Outcome: The court ruled Google's scanning was fair use for transformative research purposes.


Some will argue that the GenAI training process is analogous to how humans learn and create. Just as a human artist or writer might study and internalize various works to develop their own style and ideas, AI models analyze patterns in large datasets to generate new content. 


On that view, AI training is transformative and falls under fair use, as it does not directly reproduce copyrighted material but rather learns from it to create something new.


The analogy essentially is that, if it's legal for humans to read books or view art and then use that knowledge to create new works, AI should be allowed to do the same. 


AI companies contend that their models don't copy training data but learn associations between elements like words and pixels, similar to how humans process information.


Critics argue that the scale and comprehensiveness of AI training sets it apart from human learning, so that scale itself is a new issue. Perhaps the issue there is not so much the “reading” or “viewing” of content but the issue of maintaining the market value of copyrighted work. 


And the specific new issue is simply that generative AI is so much more efficient than humans at ingesting knowledge and using what it learns to create new content. 


Compared to older technologies, which generally only raised issues about the cost of reproducing content, AI raises far bigger issues about the creation of new content.


Monday, June 24, 2024

VR, AR and "Lean Forward" Versus "Lean Back" Experiences

It is too early to determine whether various forms of virtual reality and extended or augmented reality will prove to be marketplace successes, and if so, where and to what degree, even if we might expect the greatest appeal to be in "lean forward" experiences rather than "lean back" experiences such as entertainment video. 


Earlier efforts intended to increase the realism of "lean back" content experiences have fizzled, notably 3D and motion simulators, while less-intrusive innovations such as high-definition TV and 4K have gained much better acceptance from consumers. 


Early content innovations, including movie color and sound, did not face consumer adoption issues, as they were incremental improvements to an existing experience and did not require the purchase of new equipment or changes in behavior. 


That began to change as television was introduced, since consumers sometimes must buy new equipment to take advantage of enhancements such as color, stereo and other improved sound, higher image quality, internet access and so forth. 

 

But HDTV and 4K offer incremental improvements to the existing “lean back” viewing experience, not wholesale changes in experience. That might not be so true of VR or AR, which arguably mostly enhance “lean forward” interactive experiences. 


So some amount of consumer resistance to virtual reality games and other content might be attributed to equipment cost, user inconvenience or the absence of a "killer" application or use case. A simple lack of content could also have been a barrier. 


Regarding virtual reality, some users experience motion sickness when using VR headsets. Another issue is the high cost of entry for high-quality VR hardware and content. The most-immersive and compelling VR experiences often require expensive headsets, powerful computers or gaming consoles, and specialized software or games. 


Some of the same issues (equipment cost, discomfort and inconvenience) have arguably limited the success of 3D content. 


The point is that VR and AR might be uncomfortably more similar to 3D TV than to color or image-quality improvements for television and video, as they mostly enhance "lean forward" rather than "lean back" experiences such as traditional TV and video. 


In other words, AR and VR extend the interactive media experience (gaming, web browsing, social media, shopping, learning), with far less relevance for “lean back” entertainment video. 


"Lean forward" media refers to interactive experiences that require active participation from the user, while "lean back" media involves passive consumption of content. VR and AR are fundamentally designed to be interactive and immersive, aligning them more closely with the "lean forward" paradigm. 


So VR and AR are extensions of gaming and other interactive experiences such as search, e-commerce or e-learning or social media, and not so much an extension of lean-back media. 


Lean-forward media typically involves shorter attention spans, as users actively seek specific information.

Lean-back media typically works with longer attention spans. Lean-forward is more active; lean-back is more passive. 


So the logical issues are perhaps centered on how VR and AR can enhance interactive media, and not so much how those platforms apply to lean-back media. Most of the successful innovations related to lean-back media enhance the realism of the experience, so it is not impossible for VR and AR to create value for traditionally passive content consumption. 


But storytelling remains central for entertainment. That is not true for most interactive media, where there typically is some goal-oriented purpose (communicate, play, shop, learn). 


That noted, perhaps the more promising lean-back use cases include performances, concerts and live events; theme park attractions; and sports content. 


Sunday, June 23, 2024

Study Finds 48% of AI Projects Are Halted Midway

Nearly half of new business artificial intelligence projects are abandoned midway, a study conducted by international law firm DLA Piper finds. The study of 600 executives suggests that although more than 40 percent of organizations fear that their core business models will become obsolete unless they adopt AI technologies, 48 percent of the companies that embarked on AI projects have been forced to pause or roll them back. 


The primary reasons for these setbacks include concerns over data privacy (48 percent), issues related to data ownership and inadequate regulatory frameworks (37 percent), customer apprehensions (35 percent), the emergence of new technologies (33 percent), and employee concerns (29 percent).


That statistic seems congruent with other estimates that as many as 70 percent of information technology projects fail.


Standish Group's Annual CHAOS Report (2020): 66% of technology projects end in partial or total failure; large projects are successful less than 10% of the time.

McKinsey (2020): 17% of large IT projects go so badly that they threaten the very existence of the company.

Boston Consulting Group (BCG) (2020): 70% of digital transformation efforts fall short of meeting their targets.

Consortium for IT Software Quality (CISQ) (2020): The total cost of unsuccessful development projects among US firms is estimated at $260 billion, with operational failures caused by poor-quality software costing $1.56 trillion.

KPMG Technology Survey (2023): 51% of US technology executives reported no increase in performance or profitability from their digital transformation investments in the past two years.

Soren Lauesen (2020): Identified 37 root causes and 22 potential cures for IT project failures, emphasizing poor project management, cost estimation and requirements.

Global Capacity Networks Now Hinge on Hyperscalers, Obviously

The “hyperscale” moniker for some data centers is well earned. Though such cloud hyperscale data centers represent only a fraction of all data centers, measured by energy consumption, they might host more than 95 percent of all compute instances. 


And while connectivity demand is generally driven by data centers these days, it is the connections between hyperscale sites and the internet points of presence that arguably are most vital. 

source: Goldman Sachs 


By definition, the global “backbone” networks connect major traffic sources with each other, not consumers directly at the local level. And even allowing for some overlap (data centers require local access connections to internet points of presence, to major backbone networks and to other data centers), it is by now obvious that data centers drive the capacity requirements of the global networks. 


Capacity Segment                                            Percentage of Demand
Hyperscale Data Centers                                     50-60%
Traditional Data Centers                                    30-40%
Telco Voice Switches (legacy)                               5-10% (decreasing)
Wide Area Network (WAN) Backbone                            20-25%
Shorter Connections to Internet Points of Presence (PoPs)   10-15%


Saturday, June 22, 2024

Moore's Law Slowing Is Counterbalanced by Other Developments

It is possible to argue that Moore's Law (which suggests a doubling of transistor density about every 12 to 18 months, or about every two years in practice) has only slowed, not stopped, since it was formulated in the mid-1960s. 


Time Period     Doubling Time (Years)
1965-1975       1
1975-2000       2
2000-2010       2-3
2010-Present    3+


On the other hand, consider the transistor densities we now have to “double.” As we push the boundaries of how closely together transistors and pathways can be spaced, it becomes more difficult to manufacture the chips. 


Year   Processor Model      Transistor Count   Doubling Time (Years)
1971   Intel 4004           2,300              -
1974   Intel 8080           6,000              3
1978   Intel 8086           29,000             4
1982   Intel 80286          134,000            4
1985   Intel 80386          275,000            3
1989   Intel 80486          1,200,000          4
1993   Pentium              3,100,000          4
1997   Pentium II           7,500,000          4
1999   Pentium III          9,500,000          2
2000   Pentium 4            42,000,000         1
2006   Core 2 Duo           291,000,000        6
2010   Core i7              1,170,000,000      4
2014   Core i7 (Haswell)    1,400,000,000      4
2018   Core i9              2,000,000,000      4
2022   Apple M1             16,000,000,000     4
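
The implied doubling time can be recovered from any two rows: if the transistor count grows from N1 to N2 over a span of years, the doubling time is the span multiplied by ln(2)/ln(N2/N1). A minimal sketch of that arithmetic, using a few entries from the table above:

```python
import math

def implied_doubling_time(years_elapsed, count_start, count_end):
    """Doubling time implied by growth from count_start to count_end."""
    return years_elapsed * math.log(2) / math.log(count_end / count_start)

# A few spans taken from the table above
print(round(implied_doubling_time(1978 - 1971, 2_300, 29_000), 1))                  # ~1.9 years (1970s)
print(round(implied_doubling_time(2000 - 1989, 1_200_000, 42_000_000), 1))          # ~2.1 years (1990s)
print(round(implied_doubling_time(2022 - 2010, 1_170_000_000, 16_000_000_000), 1))  # ~3.2 years (2010s on)
```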


Similar slowing can be seen in accelerator and graphics processing chips. 


Year   GPU Model          Transistor Count   Performance Improvement   Doubling Time (Years)
2012   Kepler (GK110)     7.1 billion        Baseline                  -
2016   Pascal (GP100)     15.3 billion       2x                        4
2018   Turing (TU102)     18.6 billion       1.2x                      2
2020   Ampere (GA102)     28.3 billion       1.5x                      2
2024   Blackwell (B200)   208 billion        30x                       4


The other issue is that transistor count is not the only important variable. Parallel processing is an architectural shift that prioritizes throughput over raw clock speed.


Accelerator chips are designed for specific tasks such as AI or video processing, and their task-specific metrics arguably are more important than simple clock speed.


Heterogeneous computing combines CPUs, GPUs, and accelerators for optimal performance across different workloads, meaning overall system performance is more relevant than individual component speeds.
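
A minimal sketch of that point: aggregate throughput is the sum of what each class of processor contributes, so overall system performance depends far more on the parallel units than on any single component's clock speed. Every figure below is an illustrative assumption, not a measurement of any particular chip.

```python
# Illustrative only: system throughput as the sum of heterogeneous components.
# (units, operations per cycle per unit, clock in GHz) -- all assumed values.
ASSUMED_SYSTEM = {
    "cpu_cores":   (16, 8,    4.0),  # general-purpose cores
    "gpu_sms":     (80, 128,  1.8),  # GPU streaming multiprocessors
    "accelerator": (4,  4096, 1.0),  # matrix/AI accelerator tiles
}

def throughput_gops(components):
    """Aggregate throughput in giga-operations per second."""
    return sum(units * ops_per_cycle * clock_ghz
               for units, ops_per_cycle, clock_ghz in components.values())

print(f"{throughput_gops(ASSUMED_SYSTEM):,.0f} GOPS")
# Doubling only the CPU clock moves the total far less than the GPU or
# accelerator contributions, which is the heterogeneous-computing point.
```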


DirecTV-Dish Merger Fails

DirecTV's termination of its deal to merge with EchoStar, apparently because EchoStar bondholders did not approve, means EchoStar continue...