Monday, March 9, 2026

R.I.P. Danny Tarampi

Danny Tarampi, proprietor of Gunther Glass Surfboards, passed away recently. As a friend quipped, “none of us gets out of here alive.”


His shop was in Northridge (Roscoe and Reseda Blvd.) near California State University, Northridge. Danny shaped my favorite board of all time, a 9’6” longboard. I once bought a shortboard from him, partly in cash and partly with an eight-track tape player!


I acquired the nickname “Gunther” from my surfing buddies Chuck and Bill. As I recall, Danny surfed Malibu, as we all did, but I think he also rode at Secos (Will Rogers) and County Line.


As virtually everyone says, he was a super nice guy, quick with a smile. He moved to Hawaii at some point after I left California, after his wife died, I am told. 

Danny Tarampi


Thanks, Danny. It was great to meet you. The last time I saw him he was paddling out at Malibu. Fitting.


Will Robotaxis be Cheaper than Human-Driver Ridesharing?

Lots of people predict that robotaxis built on automated vehicles will be more affordable for customers than human-driven ridesharing.


In some cases, perhaps yes. In other cases, perhaps no. 


source: BCG 


The simple logic is that since human driver wages can account for 60 percent or more of ride-hailing operational costs, automated vehicle fleets could reduce supplier costs and lead to lower consumer pricing.


Projections from investment analyses suggest that at scale, robotaxi fares could drop to as low as $0.25-$0.50 per mile, undercutting the $0.70 per mile for personal car ownership and the $2 to $3 per mile for current UberX rides. 
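As a rough illustration, annualizing the per-mile rates quoted above over 10,000 miles of travel makes the spread easier to see. The rates are the figures quoted in this post; the mileage is just an assumed round number, not new data:

```python
MILES_PER_YEAR = 10_000  # assumed annual mileage for illustration

rates = {                # $/mile figures quoted above
    "robotaxi (low)": 0.25,
    "robotaxi (high)": 0.50,
    "personal car": 0.70,
    "UberX (low)": 2.00,
    "UberX (high)": 3.00,
}

# Annual cost at each per-mile rate
for name, rate in rates.items():
    print(f"{name}: ${rate * MILES_PER_YEAR:,.0f}/year")
```

At these rates, even the high-end robotaxi figure undercuts personal ownership, and both are far below current ride-hailing fares.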


Ark Invest certainly agrees with the thesis.  


 But, for the moment, it depends. 


In San Francisco, Waymo rides have averaged 31 to 41 percent more than comparable Uber or Lyft trips, though Tesla's early robotaxi offerings have come in cheaper, at around $8.17 per ride on average. Other studies suggest the opposite.


In principle, some scenarios seem to support the argument for lower autonomous vehicle costs, with the possibility that rider fares could be lower. 


Robotaxis can achieve cost advantages through automation's core efficiencies, especially for:

  • High-density urban areas with strong demand: In cities like San Francisco or Austin, where rides are frequent and vehicles can minimize idle time

  • Long-distance or high-utilization trips: For routes over 10-20 miles, robotaxis avoid human limitations like breaks or shift changes, potentially reducing costs by 50% or more over time

  • Projections indicate profitability at $0.50 per mile within 4-5 years, making them cheaper than personal cars for families driving 10,000 miles annually (saving ~$5,000/year)

  • Subsidized rollouts: companies like Tesla are initially undercutting competitors with aggressive pricing to gain market share, similar to Uber's early strategies

  • Electrification and scale economies: fully electric fleets reduce fuel costs dramatically, and as adoption grows (potentially doubling global miles traveled by 2030), per-ride overheads like insurance and maintenance dilute. McKinsey estimates a 50-percent drop in per-mile costs by 2030 in these optimized setups.


In other cases, the opposite might happen, as human driver services have advantages over robotaxis:

  • Early deployment or low-demand areas: in nascent markets or suburban/rural zones with sparse rides, vehicles sit idle more, spreading fixed costs (e.g., $0.30-$0.50 per mile for operations) over fewer trips. Waymo's San Francisco rides average $20.43 vs. $14-15 for Lyft/Uber, a premium driven by expensive hardware and limited scale. During rush hours, inefficiencies like cautious driving add $9-11 extra compared to human services.

  • Short trips or inefficient routing: for distances under two miles, robotaxis can charge disproportionately more per mile due to minimum fares, detours for safety, or slower responses to traffic.

  • Premium or safety-focused services: some riders pay more for the novelty or perceived safety. In areas with bad weather, complex traffic, or high accident risks, added insurance and maintenance could keep fares elevated. Regulatory requirements for human oversight (e.g., remote monitoring) also add labor costs, keeping robotaxis pricier than unsupervised human drives in the near term.

  • Monopolistic or regulated markets: if a single provider dominates, it might price at $0.50 per mile for higher margins rather than passing all savings to riders. Local regulations or other offsetting forces, such as strong union opposition, also might have an effect.
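The utilization point above can be made concrete with a toy cost model: per-mile cost is variable cost plus fixed cost spread over annual miles, so a sparsely used vehicle's per-mile cost rises sharply. All numbers here are illustrative assumptions, not operator data:

```python
def per_mile_cost(fixed_annual, variable_per_mile, annual_miles):
    """Toy model: fixed costs (vehicle, sensors, insurance) are spread
    over however many revenue miles the vehicle actually drives."""
    return variable_per_mile + fixed_annual / annual_miles

# Illustrative assumptions: $30,000/year fixed, $0.15/mile variable.
busy = per_mile_cost(30_000, 0.15, 100_000)  # high-utilization urban fleet
idle = per_mile_cost(30_000, 0.15, 30_000)   # sparse suburban demand

print(f"busy fleet: ${busy:.2f}/mile")  # $0.45/mile
print(f"idle fleet: ${idle:.2f}/mile")  # $1.15/mile
```

Under these assumed numbers, the same vehicle costs roughly 2.5x more per mile when demand is sparse, which is the dynamic behind the early-deployment premium.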


So potential prices for riders might vary: higher for some use cases; lower for others. 


Will Generative AI Degrade Cognitive Skills?

Does use of language models degrade cognitive skills? Some studies suggest it might.


Researchers found that using language models for code generation can degrade some human coding skills. The effect might happen elsewhere as well.  


AI can also shift where real mastery sits, moving value away from syntax and toward design, debugging, and systems thinking.


Some might point to prior innovations having similar impact:

  • Studies on calculators show frequent reliance can weaken basic arithmetic and number sense, even as it improves performance on brute calculation tasks.

  • Similar work on AI tools finds that heavy users tend to offload thinking to the tool, which correlates with weaker independent reasoning and critical thinking.

  • In a controlled coding study, developers allowed to use AI scored significantly lower on conceptual questions about the library they were using, suggesting inhibited skill formation when too much is offloaded.​


So what impact might AI have on programmers?

  • Routine skills: remembering APIs, writing idiomatic boilerplate, and simple algorithmic patterns are prime candidates for decay, analogous to mental arithmetic.

  • Conceptual depth: if developers mostly paste and run code, they get fewer reps in reading, tracing, and understanding unfamiliar code paths, which are key to deep fluency.​

  • Error tolerance and debugging: non‑AI users in experiments made more errors but improved debugging skills by fixing them; AI users avoided some errors but learned less from the process.​


So value can shift:

  • Problem framing and specification: the scarce skill becomes formulating precise requirements and constraints that drive the generator toward useful solutions. This parallels how good calculator use depends on setting up the right equation.

  • Code review and validation: human experts may specialize in reading AI‑generated code for security, correctness, and architecture, rather than writing every line themselves.

  • System design and abstraction: as low-level implementation is automated, comparative advantage grows in designing architectures, protocols, data models, and failure modes.

  • Meta‑skills around AI: mastery can shift into knowing when not to offload, how to structure prompts, how to test and monitor generated code, and how to integrate these tools into a development process without hollowing out the team’s competence.


Lessons from calculators and education:

  • Calculator research suggests the impact depends heavily on how the tool is integrated: thoughtful use can support higher‑order problem‑solving, but uncritical dependence weakens fundamentals.

  • The same pattern appears with AI coding tools: developers who actively interrogate and adapt AI output retain more skills than those who passively accept suggestions.​

  • Educational responses emphasize sequencing: first build core skills, then introduce tools in ways that force learners to choose when to offload and when to compute or reason themselves.


The bottom line is that some programming skills (though possibly not critical thinking) could diminish as AI is used to generate code.


But we might also note that students in many cases are no longer taught cursive writing, so that skill has atrophied. Is that, on balance, a negative thing or simply a change?


For many working programmers, syntax‑level fluency may atrophy, much as long‑division skills did for most adults, while higher‑level engineering judgment becomes the main skill shift.


A smaller group might deliberately maintain low‑level mastery (in critical infrastructure, compilers, verification) much as some mathematicians maintain strong manual skills, because they need detailed internal models to catch AI’s failures.


Tuesday, March 3, 2026

Do You Need an AI PC?

Can general-purpose language models run locally on AI PCs? Yes, in “small language model” form. Will more of that be happening in the future? Yes. Is that capability generally useful for most PC users today? No. 


But the trajectory is clearly in that direction, essentially shifting more capabilities from “cloud access” to “onboard” processing over time.


The small language model landscape in 2026 has three practical buckets: 

  • ultra-compact models (500M–2B parameters) that run on smartphone processors with 1–4GB RAM

  • compact models (2B–5B parameters) that handle complex reasoning and coding on consumer hardware

  • larger efficient models approaching frontier capability.
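One way to sanity-check the RAM figures in those buckets: a model's weight memory is roughly parameter count times bytes per parameter, so at the common 4-bit quantization a 2B-parameter model needs about 1 GB for weights alone. This is a sketch; real runtimes add KV-cache and activation memory on top:

```python
def weight_memory_gb(params_billion, bits_per_param=4):
    """Approximate weight memory: params × (bits/8) bytes per parameter.
    Ignores KV cache and activations, which add real overhead."""
    bytes_total = params_billion * 1e9 * bits_per_param / 8
    return bytes_total / 1e9  # decimal GB

# Illustrative sizes spanning the buckets described above.
for p in (0.5, 2.0, 5.0):
    print(f"{p}B params @ 4-bit ≈ {weight_memory_gb(p):.2f} GB")
```

This is why the 500M-2B bucket fits comfortably in 1-4GB of RAM, while the larger buckets start to demand dedicated memory on consumer hardware.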


Most laptop users will not find local processing a huge help for most use cases. It remains unclear how much value local live translation, text autocomplete, email summarization, note-taking, or voice assistants provide.


Creative professionals arguably might see the most tangible gains right now:

  • Adobe Photoshop uses the NPU for Generative Fill, intelligent selection, and automatic retouching

  • Adobe Premiere Pro's AI features leverage NPUs for scene detection, auto-reframe, and speech-to-text. A 10-minute 4K timeline that previously required 8 minutes for AI analysis now completes in 2 minutes on NPU-equipped systems, while the GPU remains free for color grading (OrdinaryTech)

  • Adobe’s Lightroom Classic uses the NPU for AI-assisted noise reduction in RAW files, and Capture One benefits from NPU acceleration for automatic cropping and look equalization across large batches of images.


Over time, though, more-complex tasks could shift onboard. Document creation or some code generation seem likely examples. Gaming and some business productivity use cases also seem likely to benefit.


But tasks requiring real-time world knowledge, frontier-scale reasoning, or large model access will remain cloud-based. Battery life issues might push users to continue using remote solutions, even if local processing is possible.


The sweet spot for AI PCs over the next few years might be privacy-sensitive, latency-critical, or frequently repeated tasks, with the same economics as any “local hardware versus remote service” tradeoff.


| Use Case | Mode | Reason | Example Models/Apps |
|---|---|---|---|
| Live captions & transcription | ✅ Fully Local | Latency-critical; real-time audio can't tolerate cloud round-trips | Windows Live Captions, Whisper on-device |
| Real-time translation | ✅ Fully Local | Sub-20ms latency required; privacy sensitive | Whisper, Seamless M4T |
| Writing autocomplete / suggestions | ✅ Fully Local | Keystroke-level latency; personal content stays private | Copilot+ on Windows, Apple Intelligence |
| Smart email summarization | ✅ Fully Local | Personal/sensitive data; short-form task suits SLMs | Apple Mail AI, Outlook Copilot (local tier) |
| Voice assistant (personal queries) | ✅ Fully Local | Privacy; always-on would be costly and slow over cloud | Siri (on-device), Google Gemini Nano |
| Photo organization & tagging | ✅ Fully Local | Private media; classification is well within SLM range | Apple Photos, Google Photos on-device |
| AI-assisted note-taking | ✅ Fully Local | Personal data; summarization is a strong SLM use case | Notion AI (local), Apple Notes |
| Offline coding assistant (completions) | ✅ Fully Local | Works without internet; latency-sensitive | Copilot local mode, Continue.dev + Ollama |
| Document Q&A (personal files) | ✅ Fully Local | Highly sensitive; RAG over local files suits 7B models | LlamaIndex + local model, Copilot+ |
| Background noise removal (calls) | ✅ Fully Local | Real-time signal processing; NPU-optimized | NVIDIA RTX Voice, Windows Studio Effects |
| Grammar/style checking | ✅ Fully Local | Short context, low complexity; SLMs excel | Grammarly on-device tier |
| General-purpose chat (everyday Q&A) | 🔀 Hybrid | Local handles simple queries; cloud escalates complex ones | Copilot+, Apple Intelligence with ChatGPT fallback |
| Coding assistant (complex tasks) | 🔀 Hybrid | Boilerplate → local; architecture/debugging → cloud | GitHub Copilot, Cursor |
| Document creation & long-form writing | 🔀 Hybrid | Drafting → local SLM; refinement/research → cloud | Microsoft 365 Copilot |
| Clinical note summarization | 🔀 Hybrid | On-prem inference with cloud-based model monitoring supports real-time clinical inference with minimal latency and compliance oversight | Mistral 7B + cloud monitoring |
| Personal finance analysis | 🔀 Hybrid | Sensitive data processed locally; market data fetched from cloud | Custom RAG setups |
| Semantic file/photo search | 🔀 Hybrid | Indexing runs locally; fuzzy or cross-device search may use cloud | Windows Recall (when enabled), Spotlight AI |
| AI agents (personal tasks) | 🔀 Hybrid | Small open-weight models enable more capable local tool use and agentic workflows, but complex multi-step planning still benefits from frontier model access | Qwen-Agent, local MCP setups |
| Deep research & synthesis | ☁️ Fully Cloud | Requires broad world knowledge, long context (1M+ tokens), web access | Claude, GPT-5, Gemini |
| Complex reasoning / math | ☁️ Fully Cloud | Frontier reasoning models (chain-of-thought at scale) far exceed on-device capability | Claude Opus, o3, Gemini Deep Think |
| Multimodal generation (images/video) | ☁️ Fully Cloud | Diffusion models for quality image/video generation require massive VRAM | Midjourney, Sora, Gemini |
| Real-time web knowledge | ☁️ Fully Cloud | By definition requires live internet access and large retrieval systems | All search-augmented LLMs |
| Training / fine-tuning | ☁️ Fully Cloud | Requires sustained GPU clusters; not feasible on consumer hardware | AWS, Azure, GCP |
| Enterprise-scale RAG (large corpora) | ☁️ Fully Cloud | Document stores of millions of files require server infrastructure | Azure AI Search, Pinecone |
| High-stakes legal/medical decisions | ☁️ Fully Cloud | Requires frontier accuracy, audit trails, compliance guarantees | Enterprise Claude, GPT-5 |
| Multi-user / collaborative AI | ☁️ Fully Cloud | Shared context across users/devices requires centralized state | Teams Copilot, Google Workspace AI |
| Real-time fraud detection (banking) | ☁️ Fully Cloud | Needs global pattern recognition across millions of transactions | Cloud-hosted specialized models |
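The hybrid cases above boil down to a routing decision: keep private, SLM-sized work local and escalate the rest. A minimal sketch of such a router follows; the `Task` fields, thresholds, and `route` function are hypothetical illustrations, not any vendor's API:

```python
from dataclasses import dataclass

@dataclass
class Task:
    prompt: str
    sensitive: bool = False    # personal data should stay on-device
    needs_web: bool = False    # live knowledge forces cloud
    est_complexity: int = 1    # 1 = simple ... 5 = frontier reasoning

def route(task: Task) -> str:
    """Hypothetical local/cloud router mirroring the table's logic."""
    if task.needs_web:
        return "cloud"       # real-time knowledge requires live access
    if task.sensitive and task.est_complexity <= 3:
        return "local"       # privacy plus SLM-sized work
    if task.est_complexity >= 4:
        return "cloud"       # frontier reasoning exceeds local models
    return "local"           # default: cheap and private

print(route(Task("summarize this email", sensitive=True)))         # local
print(route(Task("prove this theorem", est_complexity=5)))         # cloud
print(route(Task("what's the score right now?", needs_web=True)))  # cloud
```

Real hybrid systems add fallbacks (escalate to cloud when the local model reports low confidence), but the basic privacy/complexity/freshness split is the same.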


As you might expect, AI PCs will sometimes add a bit of cost (maybe in the $100 to $150 range), but arguably not a meaningful amount over a four-year to five-year useful device life. 
