Saturday, April 4, 2026

Sometimes "New Technology" is Not Better than the Existing Alternatives

The heads-up display on a car’s windshield is quite the novelty. Whether it is useful is not so much the issue. Whether it is usable is the issue.


If you are wearing polarized sunglasses while driving, the display is blocked: the light a HUD reflects off the windshield is largely polarized, and polarized lenses filter it out. So now you have to make a technology decision: wear only non-polarized sunglasses, keep two pairs of sunglasses, or simply accept that you aren’t going to be using the heads-up display.


| Interface | Strength | Weakness |
| --- | --- | --- |
| Smartphone | Full control, rich apps | Requires attention + hands |
| Smart glasses | Hands-free, contextual AR | Limited display, battery |
| HUD (cars, aviation, AR overlays) | Immediate situational awareness | Narrow information bandwidth |
| Audio (voice assistants, earbuds) | Zero visual load, ambient | Low precision, ambiguity |


Sometimes a cool and novel technology is not as transparent or easy to use as its designers intended.

HUDs face calibration issues, glare in sunlight, and the tendency to clutter your field of vision rather than simplify it.


Gesture controls (BMW, Volvo) require precise, unnatural hand movements; accidental triggers are common; and they are often much slower than just pressing a button.


Voice assistants in cars often struggle with accents, road noise, and complex commands and might require you to memorize specific phrasing. 


Smartwatch notifications are easy, but replying on a tiny screen, managing apps, or navigating menus is clunky.


Smart glasses (Google Glass, Ray-Ban Meta) might be socially awkward to use in public. The display can be hard to view in bright light, and voice commands might feel unnatural.


Interacting with a fitness tracker with a small display can be challenging. 


Touch-panel light switches that replace a simple tactile switch with a glass panel that you have to look at and press just right can be more work than the original interface.


Smart locks with keypads or apps can mean fumbling with your phone in the dark or cold, which is arguably worse than using a key.


Voice-controlled TVs can be convenient, but it might still be easier to just pick up the remote. 


And some audiovisual enhancements just never seem to catch on, such as 3D TVs or spatial audio headphones with head tracking. The effect is impressive for a few minutes, but then the constant recalibration and lag can become annoying.


The point is that there are typically multiple ways to satisfy a given need, and not all of them are equally compelling all the time.


| Use Case / Problem | Smartphone (handheld apps) | Smart Glasses (AR / wearable) | HUD (fixed display, e.g., car windshield) | Audio Interface (voice / earbuds) | Key Tradeoff |
| --- | --- | --- | --- | --- | --- |
| Navigation / directions | Map app, turn-by-turn directions | Directions overlaid in field of view | Turn arrows projected on windshield | Spoken directions only | Visual vs. distraction vs. convenience |
| Messaging / communication | Typing, reading full threads | Glanceable notifications, voice reply | Minimal alerts (e.g., incoming call) | Dictation + read-aloud messages | Precision vs. speed |
| Translation / language help | App-based translation (camera or text) | Real-time subtitles in view | Rare / limited | Real-time spoken translation | Latency vs. immersion (Alibaba) |
| Photography / recording | Manual capture via camera | First-person, hands-free capture | Not typical | Voice-triggered capture (via phone) | Control vs. immediacy |
| Work instructions (field work, repair) | Manuals, videos, checklists | Step-by-step AR overlays on real objects | Industrial HUDs for critical info | Audio instructions | Hands-free advantage is decisive (MDPI) |
| Fitness / health tracking | Apps + wearable sync | Real-time biometrics in view | Heads-up metrics (cycling, driving) | Spoken coaching feedback | Attention vs. safety |
| Search / information lookup | Browser or app search | Contextual info about what you see | Limited contextual prompts | Voice queries + answers | Speed vs. depth |
| Entertainment / media | Video, games, social media | Private AR screens / lightweight viewing | Minimal (music info, etc.) | Music, podcasts | Immersion vs. mobility |
| Notifications / alerts | Full notification center | Peripheral, glanceable alerts | Critical alerts only | Spoken alerts | Cognitive load management |
| Meetings / collaboration | Video calls, chat apps | AR annotations, shared view | Limited | Voice-only participation | Richness vs. friction |
| Accessibility (vision/hearing) | Accessibility apps | Real-time captions, object recognition | Limited | Screen readers, voice control | Continuous assistance advantage (MDPI) |
| Shopping / product info | Apps, scanning barcodes | Overlay product info in-store | Rare | Voice search | Contextual relevance |
| Driving / safety | Phone navigation (unsafe to handle) | Experimental (not widely used) | Core use case (speed, nav, alerts) | Voice navigation | Safety-critical context favors HUD |
| Daily task management | Calendars, reminders | Subtle prompts in field of view | Minimal | Voice reminders | Interrupt vs. ambient nudges |

What changes is how and when you access those capabilities: with hands, eyes, voice, or a passive display.


Smartphones arguably remain the “general-purpose hub,” while smart glasses, heads-up displays (HUDs), and audio interfaces specialize in reducing friction in specific contexts (hands-free work, real-time awareness, ambient computing).


The point is that any given need is rarely served by only one mode, one device, or one physical solution, and some “advanced” solutions do not provide a better user experience.


