Monday, April 6, 2026

Gemma 4 is Designed to Run on Edge Devices Such as Smartphones, Using Apache 2.0 License

Gemma 4, Google’s latest open source artificial intelligence model, is probably important for several reasons. For starters, it uses the Apache 2.0 license, which means developers can take Gemma 4, fine-tune it, ship it in a product, and charge money for it, and Google has no claim over what they build.

You might argue that closed models are a poor fit for most independent developers and small companies: they are expensive at scale, opaque, and leave developers permanently dependent on another company’s pricing decisions.


Gemma changes the economics: you host it, control the data, and tune it to your use case. Developers pay for compute, not per-token fees.
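As a back-of-envelope sketch of that difference, with every price and workload figure below purely hypothetical rather than an actual vendor rate, compare a metered per-token API with flat self-hosted compute:

```python
# Illustrative comparison of per-token API pricing vs. self-hosted compute.
# All numbers are assumptions for the sketch, not real vendor prices.

def api_cost(tokens_per_month: float, price_per_million_tokens: float) -> float:
    """Monthly cost of a metered, per-token API."""
    return tokens_per_month / 1_000_000 * price_per_million_tokens

def self_hosted_cost(gpu_hours_per_month: float, price_per_gpu_hour: float) -> float:
    """Monthly cost of hosting an open model yourself: you pay for compute, not tokens."""
    return gpu_hours_per_month * price_per_gpu_hour

# Hypothetical workload: 500M tokens per month.
tokens = 500_000_000
api = api_cost(tokens, price_per_million_tokens=2.00)          # assumed $2 per 1M tokens
hosted = self_hosted_cost(gpu_hours_per_month=720, price_per_gpu_hour=1.50)  # one GPU, 24/7

print(f"API (metered): ${api:,.0f}/month")
print(f"Self-hosted:   ${hosted:,.0f}/month")
# The key structural point: self-hosted cost is flat with volume, so marginal
# cost per token falls as usage grows, while metered cost scales linearly.
```

The crossover point depends entirely on volume; the sketch only shows why a flat compute bill changes the business model at scale.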


Also, the models are engineered from the ground up for maximum compute and memory efficiency, to preserve RAM and battery life. 
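A rough way to see why that efficiency engineering matters on a phone: the weight footprint of a model is roughly parameter count times bits per weight. The 4B-parameter figure below is purely illustrative, not a published Gemma 4 specification, and real on-device memory use also includes activations and the KV cache:

```python
# Back-of-envelope RAM footprint for model weights at different quantization
# levels. The parameter count is an illustrative assumption.

def weight_memory_gb(params_billion: float, bits_per_weight: int) -> float:
    """Approximate GiB needed just to hold the weights."""
    bytes_total = params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 2**30

for bits in (16, 8, 4):
    gb = weight_memory_gb(4.0, bits)   # hypothetical 4B-parameter model
    print(f"{bits:>2}-bit weights: ~{gb:.1f} GiB")
```

At 4-bit quantization a hypothetical 4B-parameter model needs under 2 GiB for weights, which is why aggressive quantization is what makes smartphone-class RAM budgets plausible.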


“These multimodal models run completely offline with near-zero latency across edge devices like phones, Raspberry Pi, and NVIDIA Jetson Orin Nano,” Google notes, with the more-complex models running on a single graphics processing unit (GPU).


But Gemma 4 is optimized for on-device and low-resource environments, including mobile.


Because Gemma 4 runs locally, it reduces inference and application programming interface (API) costs, so startups and independent developers can build AI products at much lower marginal cost, expanding the range of viable business models, especially for specialized use cases.


Of course, as is often the case with open source, there are advantages for the sponsor.


Historically, Google has used open tools to drive developer adoption and ecosystem lock-in, and Gemma 4 arguably fits that pattern:

  • Free/open models attract developers

  • Developers build apps

  • Apps are hopefully hosted on Google Cloud. 


From Google’s point of view, the goal is to remain relevant no matter what happens to the cloud computing inference business.


| Old model | Emerging model |
|---|---|
| Centralized cloud inference | Distributed + edge inference |
| Pay-per-API-call | Local + hybrid |
| Vendor-controlled | Developer-controlled |


Also, Gemma 4 diversifies Google’s model approach: Gemma targets the segment of the market requiring open, lightweight, customizable solutions, while Gemini focuses on the segment where proprietary, frontier, premium models are valued.


So Gemma should appeal to users focused on experimentation, edge computing and cost-sensitive use cases. Gemini remains focused on high-end reasoning and enterprise-grade reliability.


Sunday, April 5, 2026

First Movers Can Attain Sustainable Advantage, But It Is Hard to Accomplish

Some might argue that firms deploying artificial intelligence early will gain a sustainable advantage over others in their industry and category. I tend to doubt that. 


The issue is that some firms might have other advantages. They might be better managed in general; more adaptable; more agile. The point is that they might do most things better than competitors, including using new technology. 


The management literature generally supports the idea that sustainable advantage is quite rare and, where it occurs, might be explained by advantages other than early deployment of a new technology.


In short, being first to deploy a significant technology is, by itself, not a reliable source of sustained competitive advantage.


Lieberman and Montgomery’s “First-Mover Advantages” is a foundational study.


The time for competitors to enter a new product market has shrunk dramatically, from 33 years early in the 20th century to 3.4 years later in the century, and it continues to shrink. So first-mover advantage exists, but it is most likely fleeting.
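If you assume the two observations are roughly 80 years apart (an assumption for the sketch; the source gives no exact dates), the cited shrink implies a fairly steady compound decline:

```python
# Implied compound annual rate at which time-to-imitation shrank, given
# 33 years early in the 20th century and 3.4 years late in it. The 80-year
# span between observations is an assumption, not from the source.

def annual_decline_rate(start: float, end: float, years: float) -> float:
    """Compound annual rate at which the interval shrank."""
    return 1 - (end / start) ** (1 / years)

rate = annual_decline_rate(33.0, 3.4, 80)
print(f"Implied decline: ~{rate:.1%} per year")   # roughly 3% per year
```

Under that assumption the imitation window compresses by roughly 3 percent a year, which compounds into a nearly tenfold reduction over the period.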


And early technology adoption by itself is not a determinant of sustained competitive advantage. Instead, it usually indicates there are other mechanisms at work. 


Early movers create enduring benefits only when their timing advantage is combined with other mechanisms: network effects, competence-enhancing technology trajectories, sustained partner rents across the value network, and continuous absorption of external knowledge, for example.


Almost by definition, new technologies that prove valuable become accessible to all over time: technology that only enterprises can afford at first is eventually adapted for the mid-market and, finally, for small businesses and individuals.


For competitive advantage to be sustained over time, barriers to imitation must exist. 


“The problem for Kmart and other wannabe Wal-Marts is what Lippman and Rumelt refer to as causal ambiguity,” say the authors of The Analysis of Competitive Advantage.


“The more multidimensional a firm’s competitive advantages might be, and the more each dimension of competitive advantage is based on complex bundles of organizational capabilities rather than individual resources, the more difficult it is for a competitor to diagnose the determinants of success,” they state. 


“The outcome of causal ambiguity is uncertain imitability: where there is ambiguity associated with the causes of a competitor’s success, any attempt to imitate that strategy is subject to uncertain success,” they add.


In other words, sustainable advantage might happen when competitors are uncertain about how a particular innovation adds value, in a mix of value drivers. 


On the other hand, that is probably the outlier. 


Contemporary research from Harvard Business School indicates that first-mover companies achieve sustainable competitive advantage in only 37 percent of new market categories, while fast followers demonstrate superior long-term profitability in 42 percent of markets studied.


Being a fast follower often results in long-term advantage, some studies have found. 


Still, some studies suggest when sustainable advantage might be generated. 


Network effects are a classic example. When a product or platform becomes more valuable as more people use it, early movers can lock in users before alternatives exist. 
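One common way to formalize this is Metcalfe’s law, under which the number of possible pairwise connections, a rough proxy for network value, grows quadratically with users while the cost of adding a user grows only linearly:

```python
# Metcalfe's law as a sketch of network effects: possible pairwise
# connections among n users grow as n*(n-1)/2, i.e. roughly n squared.

def possible_connections(n: int) -> int:
    """Pairwise connections in a network of n users."""
    return n * (n - 1) // 2

for users in (10, 100, 1000):
    print(f"{users:>5} users -> {possible_connections(users):>8,} possible connections")
```

A 100x increase in users yields roughly a 10,000x increase in possible connections, which is why an early mover that locks in users first can be so hard to dislodge.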


Proprietary technology plus early entry can offer sustainable advantages, as can switching costs and ecosystem lock-in.


Still, first movers often do not succeed. When a new technology destroys the value of an incumbent's existing capabilities or partner relationships, early deployment can be a liability, if fast followers are agile enough. 


But the rule might well be that when new technologies are able to gain wide adoption, sustainable advantage cannot be maintained by a first mover. 


The research suggests that durable advantage comes not from deploying a technology first, but from what a firm builds on top of that deployment: network effects, switching costs, proprietary technology, and ecosystem lock-in, for example.


Without those, competitors generally catch up, and faster with each passing decade.

Saturday, April 4, 2026

Sometimes "New Technology" is Not Better than the Existing Alternatives

The “heads-up” display on a car’s windshield is quite the novelty. Whether it is useful is not so much the issue. Whether it is usable is the issue.


If you are wearing polarized sunglasses while driving, the display is blocked. So now you have to make a technology decision: wear only non-polarized sunglasses, keep two types of sunglasses, or simply accept that you aren’t going to use the heads-up display.


| Interface | Strength | Weakness |
|---|---|---|
| Smartphone | Full control, rich apps | Requires attention + hands |
| Smart glasses | Hands-free, contextual AR | Limited display, battery |
| HUD (cars, aviation, AR overlays) | Immediate situational awareness | Narrow information bandwidth |
| Audio (voice assistants, earbuds) | Zero visual load, ambient | Low precision, ambiguity |


Sometimes a cool, novel technology is neither transparent nor easy to use, even though it was designed to be.

HUDs face calibration issues, glare in sunlight, and the tendency to clutter your field of vision rather than simplify it.


Gesture controls (BMW, Volvo) require precise, unnatural hand movements; accidental triggers are common; and they are often much slower than just pressing a button.


Voice assistants in cars often struggle with accents, road noise, and complex commands and might require you to memorize specific phrasing. 


Smartwatch notifications are easy, but replying on a tiny screen, managing apps, or navigating menus is clunky.


Smart glasses (Google Glass, Ray-Ban Meta) might be socially awkward to use in public. It can be hard to view the display in bright light and voice commands might feel unnatural. 


Interacting with a fitness tracker with a small display can be challenging. 


Touch-panel light switches that replace a simple tactile switch with a glass panel that you have to look at and press just right can be more work than the original interface.


Smart locks with keypads or apps can mean fumbling with your phone in the dark or cold, which is arguably worse than using a key.


Voice-controlled TVs can be convenient, but it might still be easier to just pick up the remote. 


And some audiovisual enhancements just never seem to catch on, such as 3D TVs or spatial audio headphones with head tracking. The effect is impressive for a few minutes, but then the constant recalibration and lag can become annoying.


The point is, there typically are multiple ways to satisfy some need, and not all the ways are equally compelling, all the time. 


| Use Case / Problem | Smartphone (handheld apps) | Smart Glasses (AR / wearable) | HUD (fixed display, e.g., car windshield) | Audio Interface (voice / earbuds) | Key Tradeoff |
|---|---|---|---|---|---|
| Navigation / directions | Map app, turn-by-turn directions | Directions overlaid in field of view | Turn arrows projected on windshield | Spoken directions only | Visual vs. distraction vs. convenience |
| Messaging / communication | Typing, reading full threads | Glanceable notifications, voice reply | Minimal alerts (e.g., incoming call) | Dictation + read-aloud messages | Precision vs. speed |
| Translation / language help | App-based translation (camera or text) | Real-time subtitles in view | Rare / limited | Real-time spoken translation | Latency vs. immersion (Alibaba) |
| Photography / recording | Manual capture via camera | First-person, hands-free capture | Not typical | Voice-triggered capture (via phone) | Control vs. immediacy |
| Work instructions (field work, repair) | Manuals, videos, checklists | Step-by-step AR overlays on real objects | Industrial HUDs for critical info | Audio instructions | Hands-free advantage is decisive (MDPI) |
| Fitness / health tracking | Apps + wearable sync | Real-time biometrics in view | Heads-up metrics (cycling, driving) | Spoken coaching feedback | Attention vs. safety |
| Search / information lookup | Browser or app search | Contextual info about what you see | Limited contextual prompts | Voice queries + answers | Speed vs. depth |
| Entertainment / media | Video, games, social media | Private AR screens / lightweight viewing | Minimal (music info, etc.) | Music, podcasts | Immersion vs. mobility |
| Notifications / alerts | Full notification center | Peripheral, glanceable alerts | Critical alerts only | Spoken alerts | Cognitive load management |
| Meetings / collaboration | Video calls, chat apps | AR annotations, shared view | Limited | Voice-only participation | Richness vs. friction |
| Accessibility (vision/hearing) | Accessibility apps | Real-time captions, object recognition | Limited | Screen readers, voice control | Continuous assistance advantage (MDPI) |
| Shopping / product info | Apps, scanning barcodes | Overlay product info in-store | Rare | Voice search | Contextual relevance |
| Driving / safety | Phone navigation (unsafe to handle) | Experimental (not widely used) | Core use case (speed, nav, alerts) | Voice navigation | Safety-critical context favors HUD |
| Daily task management | Calendars, reminders | Subtle prompts in field of view | Minimal | Voice reminders | Interrupt vs. ambient nudges |


What changes is how and when you access those capabilities: with hands, eyes, voice, or a passive display.


Smartphones arguably remain the “general-purpose hub,” while smart glasses, heads-up displays (HUDs), and audio interfaces specialize in reducing friction in specific contexts (hands-free work, real-time awareness, ambient computing).


The point is that a given capability is rarely confined to one mode, device, or physical solution, and some “advanced” solutions do not provide a better user experience.

