
Saturday, January 31, 2026

"Lean Back" and "Lean Forward" Differences Might Always Condition VR or Metaverse Adoption

By now, it is hard to argue against the idea that the commercial adoption of the “metaverse” and “virtual reality” for consumer media was indeed “too early,” given the state of the underlying infrastructure and technology.


Perhaps more debatable are the reasons why the metaverse failed to catch favor with consumers. In one sense, the entire evolution of electronic media (and live performance before it) trends in the direction of greater realism.


In some cases, the evolution also trends toward greater immersion and interaction (digital media such as social media are built on interactivity).


Many earlier technologies also failed commercially at first (or were too expensive or too limited) but later succeeded when hardware, networks, and user expectations caught up. The issue is whether VR is now on a similar path, waiting only for further improvements in hardware, content, and ecosystem to reach broad commercial success. If those cease to be barriers, VR might flourish.


But those are big “ifs.”


The distinction between “lean forward” experiences (interactive content and media) and “lean back” content experiences (passive entertainment and content consumption) arguably remains. 

 

The heavy hype around the metaverse (2021–2023) greatly outpaced the actual state of the technology and user readiness. 


At the same time, there was no killer application that made the metaverse uniquely valuable. 


For retail, shopping in a virtual store was more effort than clicking on Amazon; for social, a video call or chat was simpler than navigating a 3D world. 


Still, we might ponder the idea that the metaverse is part of the evolution of media in the direction of greater realism, immersion, and interactivity. But experience with 3D video content so far suggests other issues. 


That is not to deny that earlier important technologies or products were similarly “too early” for mass adoption. Indeed, that happens relatively often.


Technology / Product | Era (Initial Launch) | Why It Failed Then | Later Commercial Successors
AT&T Picturephone | 1964 | Too expensive, poor quality, low bandwidth, seen as intrusive | Video calling via Skype, FaceTime, Zoom, Facebook Portal (2000s–2010s)
Apple Newton MessagePad | 1993 | Poor handwriting recognition, bulky, limited battery, no cellular data | iPhone (2007 onward), modern smartphones
Microsoft Tablet PC | Early 2000s | Expensive, heavy, complex OS, marketed as desktop replacement | Apple iPad (2010 onward), modern tablets
Nintendo Virtual Boy | 1995 | Monochrome display, caused discomfort, weak software, niche appeal | Oculus Rift, HTC Vive, PlayStation VR, modern VR headsets (2016 onward)
Sony Glasstron (HMD) | 1997 | Low resolution, bulky, limited use cases, expensive | Modern VR/AR headsets (Meta Quest, HoloLens, etc.)
Philips CD-i (CD-based console) | 1992 | Weak hardware, poor game library, limited developer support | DVD/Blu-ray consoles, modern optical disc games, digital distribution
Microsoft SPOT Watch | 2004 | Limited data (FM-based), no Wi‑Fi, high price, limited aesthetics | Apple Watch, Samsung Galaxy Watch, modern smartwatches
LoudCloud (cloud computing) | 1999 | Market not ready for cloud services, infrastructure immature | Amazon Web Services (AWS), Microsoft Azure, Google Cloud
SixDegrees.com (social network) | 1997 | Early internet adoption, low bandwidth, no newsfeed, limited features | Facebook, LinkedIn, Instagram, TikTok
Ask Jeeves (early search) | 1996 | Simple interface, limited results, not as powerful as later search engines | Google search, modern AI-powered search assistants


These all embody the same pattern: a visionary idea was technically possible, but the ecosystem (hardware, networks, software, business models, and user habits) had not yet matured enough for mass adoption.


But we might argue that timing is not the entire issue for 3D and the metaverse. The barriers might extend beyond hardware cost and complexity, physical ease of use, compelling content availability, a killer use case or app, and the lack of widely standardized platforms, to other social and behavioral barriers.


VR is unlikely to become truly mainstream because VR and the metaverse face many of the same adoption barriers that plagued 3D TV and 3D cinema content.


Arguably, 3D content, VR, and the metaverse struggle with overlapping problems that go beyond hardware friction and cost, the shortage of compelling content, and the lack of a universal platform or value proposition.


The problem still includes the distinction between “lean forward” and “lean back” media consumption. 


Arguably, 3D content consumption clashes with the casual, social way people consume TV (eating, multitasking, impromptu guests).


VR headsets block the real world, making it hard to chat with others in the room or quickly switch between the virtual and physical. This makes VR ill‑suited for many everyday social and entertainment scenarios where people currently use flat screens.


VR and the metaverse face additional, deeper challenges. Even 3D TV was still a passive, screen‑based experience (lean back). VR and metaverse demand active participation. They are “lean forward” experiences. 


3D TV was trying to graft a known experience (that of the cinema) into the home. VR and the metaverse ask users to adopt entirely new behaviors and actually “work” to create the experience: substituting “lean forward” for “lean back.”


That might suggest a “somewhat niche” adoption pattern: adoption in specific, high‑value domains such as gaming, training, simulation, or remote collaboration, where the behavior is expected to be “lean forward.”


But widespread adoption as a “lean back” experience such as television and entertainment video might never happen. People don’t really want to “work” when they consume entertainment video. 


In that sense, some of us were wrong to think the metaverse was simply the next evolution of realism in “lean back” entertainment media. It might be an evolution of “lean forward” social media, learning and gaming experiences.


Wednesday, January 28, 2026

Has AI Use Reached an Inflection Point, or Not?

As always, we might well disagree about the latest statistics on AI usage.


The proportion of U.S. employees who report using artificial intelligence daily rose from 10 percent to 12 percent in the fourth quarter of 2025, a Gallup survey finds. 


Frequent use, defined as using AI at work at least a few times a week, has also inched up three percentage points to 26 percent.


source: Gallup 


The percentage of those who use AI at work at least a few times a year was flat in the fourth quarter of 2025.  


And nearly half of U.S. workers (49 percent) report that they “never” use AI in their role.


As always, that data will be interpreted in several possible and contradictory ways:

  • Not every job role requires AI

  • Some use cases and verticals use AI heavily

  • Adoption has reached an inflection point

  • Adoption is quite fast

  • Adoption is slowing


source: Gallup 


Some of us might argue that AI is at an adoption rate inflection point, the historical precedent being that adoption shifts to a higher gear once about 10 percent of consumers use any particular technology. 
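

As a rough illustration (not drawn from the Gallup data, and using purely made-up parameters), a simple logistic diffusion curve shows why the 10 percent mark is often treated as the point where adoption “shifts gears”: annual gains keep accelerating from there until the curve passes its 50 percent midpoint.

import math

# Hypothetical sketch of an S-shaped (logistic) adoption curve.
# The midpoint year and growth rate are illustrative assumptions,
# not estimates for AI or any particular technology.
def adoption_share(year, midpoint_year=12, growth_rate=0.4):
    """Fraction of the population that has adopted by a given year."""
    return 1.0 / (1.0 + math.exp(-growth_rate * (year - midpoint_year)))

previous = 0.0
for year in range(25):
    share = adoption_share(year)
    gain = share - previous  # year-over-year increase in adoption share
    previous = share
    print(f"year {year:2d}: {share:5.1%} adopted (+{gain:.1%} this year)")

# On this illustrative curve, adoption crosses roughly 10 percent a few
# years before the midpoint, just as the annual gains begin to accelerate.

On such a curve, the largest annual gains arrive around the midpoint, so crossing 10 percent marks the start of the steep portion of the S-curve rather than the peak of growth.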


Also, Amara's Law suggests the impact is likely to be less than we expect in the short term (as in, “now” or “today”), while long-term impact will be greater than we anticipate.


Amara’s Law suggests that we tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.


Source


“Most people overestimate what they can achieve in a year and underestimate what they can achieve in ten years” is a quote whose provenance is unknown, though some attribute it to Stanford computer scientist Roy Amara. Some people call it “Gates’ Law.”


Some products or technologies (and AI might be among them) can take decades to reach mass adoption, especially if we start tracking adoption from the time a new technology is discovered, rather than “starting the clock” when “commercialization” begins. 


The “next big thing” will have first been talked about roughly 30 years ago, says technologist Greg Satell. IBM coined the term machine learning in 1959, for example, and machine learning is only now in widespread use. 


Alexander Fleming discovered penicillin in 1928, but it didn’t arrive on the market until 1945, nearly 20 years later.


Electricity did not have a measurable impact on the economy until the early 1920s, roughly 40 years after Edison’s first power plant, it can be argued.


It wasn’t until the late 1990s, about 30 years after 1968, that computers had a measurable effect on the U.S. economy, many would also note.


The point is that it is way too early to discern the actual productivity gains AI will eventually deliver. We will expect more, and be disappointed, over the short term. But we will underestimate impact over the longer term. 


And there is good reason to believe the inflection point in adoption has only just been reached.


Monday, January 26, 2026

Clear AI Productivity? Remember History: It Will Take Time

History is quite useful for many things. For example, when some argue that AI adoption still lags, that observation, even when accurate, ignores the general history of computing technology adoption, which is that it takes longer than most expect. 


Consider a widely-discussed MIT study that was also widely misinterpreted. Press reports said the study showed AI was not producing productivity gains at enterprises.


So all we really know is that pilot projects have not yet shown productivity gains at the whole-enterprise level. And how could they? 


Much has been made of a study suggesting 95 percent of enterprises deploying artificial intelligence are not seeing a return on investment.


There’s just one glaring problem: the report points out that just five percent of those entities have AI in a “production” stage. The rest are pilots or limited early deployments. 


That significant gap between AI experimentation and successful, large-scale deployment arguably explains most of the sensationalized claim that “only five percent of enterprises” are seeing return on AI investment. 


It would be much more accurate to say that most enterprises have not yet deployed AI at scale, and therefore we cannot yet ascertain potential impact. 


But that is not unusual for any important new computing technology. Adoption at scale takes time. 


Consider the adoption of personal computers, ignoring the early hobbyist phases prior to 1981, which would lengthen the adoption period. At best, 10-percent adoption happened in four years, but 50-percent adoption took 19 years. 


It took at least five years for the visual web to reach 10-percent adoption, and about a decade to reach 50-percent usage. 


For home broadband, using a very conservative definition of “broadband” (perhaps 1.5 Mbps up to 100 Mbps), it took seven years to reach half of U.S. homes.


Technology | Commercial Start (Year) | Time to 10% Adoption | Time to 50% Adoption | The "Lag" Context
Personal Computer | 1981 (IBM PC launch) | ~4 Years (1985) | ~19 Years (2000) | High Lag. Slowed by high cost ($1,500+), lack of connectivity (pre-internet), and steep learning curve (DOS/early Windows).
Internet | 1991 (WWW available) | ~5 Years (1996) | ~10 Years (2001) | Medium Lag. Required physical infrastructure (cables/modems) and ISP subscription growth. "Network effects" accelerated it rapidly in the late 1990s.
Broadband | ~2000 (Cable/DSL) | ~2 Years (2002) | ~7 Years (2007) | Medium Lag. Replaced dial-up. Dependent on telecom providers upgrading last-mile infrastructure to homes.
Smartphone | 2007 (iPhone launch) | ~2 Years (2009) | ~5-6 Years (2012-13) | Low Lag. Piggybacked on existing cellular networks. High replacement rate of mobile phones accelerated hardware turnover.
Tablet | 2010 (iPad launch) | ~2 Years (2012) | ~5 Years (2015) | Low Lag. Benefited from the "post-PC" era ecosystem. Familiar interface (iOS/Android) meant zero learning curve for smartphone users.
Generative AI | 2022 (ChatGPT launch) | <1 Year (2023) | ~2-3 Years (Proj. 2025)* | Near-Zero Lag. Instant global distribution via browser/app. "Freemium" models removed cost barriers. Adoption is currently outpacing the smartphone and internet.


The point is that widespread adoption of any popular and important consumer computing technology does take longer than we generally imagine. 


AI adoption is only at the very early stages. It will take some time for workflows to be redesigned, for apps to be created and rebuilt, and for user behavior to begin to match the new capabilities.


It is unreasonable to expect widespread evidence of productivity benefits so soon after introduction, even if new technologies now seemingly are adopted at a faster rate than prior innovations.


"Lean Back" and "Lean Forward" Differences Might Always Condition VR or Metaverse Adoption

By now, it is hard to argue against the idea that the commercial adoption of “ metaverse ” and “ virtual reality ” for consumer media was in...