Monday, March 9, 2026

Will Generative AI Degrade Cognitive Skills?

Does use of language models degrade cognitive skills? Some studies suggest it might.


Researchers found that using language models for code generation can degrade some human coding skills. The effect might happen elsewhere as well.  


AI can also shift where real mastery sits, moving value away from syntax and toward design, debugging, and systems thinking.


Some might point to prior innovations that had similar impacts:

  • Studies on calculators show frequent reliance can weaken basic arithmetic and number sense, even as it improves performance on brute calculation tasks.

  • Similar work on AI tools finds that heavy users tend to offload thinking to the tool, which correlates with weaker independent reasoning and critical thinking.

  • In a controlled coding study, developers allowed to use AI scored significantly lower on conceptual questions about the library they were using, suggesting inhibited skill formation when too much is offloaded.


So what impact might AI have on programmers?

  • Routine skills: remembering APIs, writing idiomatic boilerplate, and simple algorithmic patterns are prime candidates for decay, analogous to mental arithmetic.

  • Conceptual depth: if developers mostly paste and run code, they get fewer reps in reading, tracing, and understanding unfamiliar code paths, which are key to deep fluency.

  • Error tolerance and debugging: non‑AI users in experiments made more errors but improved debugging skills by fixing them; AI users avoided some errors but learned less from the process.


So value can shift:

  • Problem framing and specification: the scarce skill becomes formulating precise requirements and constraints that drive the generator toward useful solutions. This parallels how good calculator use depends on setting up the right equation.

  • Code review and validation: human experts may specialize in reading AI‑generated code for security, correctness, and architecture, rather than writing every line themselves.

  • System design and abstraction: as low-level implementation is automated, comparative advantage grows in designing architectures, protocols, data models, and failure modes.

  • Meta‑skills around AI: mastery can shift into knowing when not to offload, how to structure prompts, how to test and monitor generated code, and how to integrate these tools into a development process without hollowing out the team’s competence.


Lessons from calculators and education:

  • Calculator research suggests the impact depends heavily on how the tool is integrated: thoughtful use can support higher‑order problem‑solving, but uncritical dependence weakens fundamentals.

  • The same pattern appears with AI coding tools: developers who actively interrogate and adapt AI output retain more skills than those who passively accept suggestions.

  • Educational responses emphasize sequencing: first build core skills, then introduce tools in ways that force learners to choose when to offload and when to compute or reason themselves.


The bottom line is that some programming skills (though possibly not critical thinking) could diminish as AI is used to generate code.


But we might also note that students in many cases are no longer taught cursive writing, so that skill has atrophied. Is that, on balance, a negative thing or simply a change?


For many working programmers, syntax‑level fluency may atrophy, much as long‑division skills did for most adults, while higher‑level engineering judgment becomes the dominant skill.


A smaller group might deliberately maintain low‑level mastery (in critical infrastructure, compilers, verification) much as some mathematicians maintain strong manual skills, because they need detailed internal models to catch AI’s failures.


Tuesday, March 3, 2026

Do You Need an AI PC?

Can general-purpose language models run locally on AI PCs? Yes, in “small language model” form. Will more of that be happening in the future? Yes. Is that capability generally useful for most PC users today? No. 


But the trajectory is clearly in that direction, essentially shifting more capabilities from “cloud access” to “onboard” processing over time.


The small language model landscape in 2026 has three practical buckets: 

  • ultra-compact models (500M–2B parameters) that run on smartphone processors with 1–4GB RAM

  • compact models (2B–5B parameters) that handle complex reasoning and coding on consumer hardware

  • larger efficient models approaching frontier capability.
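The RAM figures attached to these buckets follow from straightforward arithmetic. A minimal sketch, assuming 4-bit quantization and a rough 20% overhead factor (the function name and overhead value are illustrative assumptions, not a published formula):

```python
def model_ram_gb(params_billions: float, bits_per_weight: int = 4,
                 overhead: float = 1.2) -> float:
    """Rough RAM estimate: quantized weight bytes plus ~20% overhead
    for the KV cache, activations, and runtime buffers."""
    weight_bytes = params_billions * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 1e9

# Ultra-compact 2B model at 4-bit quantization: about 1.2 GB
print(round(model_ram_gb(2.0), 1))
# Compact 5B model at 4-bit: about 3.0 GB; at 8-bit it roughly doubles
print(round(model_ram_gb(5.0), 1))
print(round(model_ram_gb(5.0, bits_per_weight=8), 1))
```

This is why a 2B model fits comfortably in a phone's 1–4GB budget, while 5B-class models sit at the edge of what typical consumer laptops can spare.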


Most laptop users will not find local processing a huge help for most use cases. It remains unclear how much value local live translation, text autocomplete, email summarization, note-taking, or voice assistants provide.


Creative professionals arguably might see the most tangible gains right now:

  • Adobe Photoshop uses the NPU for Generative Fill, intelligent selection, and automatic retouching

  • Adobe Premiere Pro's AI features leverage NPUs for scene detection, auto-reframe, and speech-to-text. A 10-minute 4K timeline that previously required 8 minutes for AI analysis now completes in 2 minutes on NPU-equipped systems, while the GPU remains free for color grading.

  • Adobe’s Lightroom Classic uses the NPU for AI-assisted noise reduction in RAW files, and Capture One uses it for automatic cropping and look equalization across large batches of images.


Over time, though, more complex tasks could shift onboard. Document creation and some code generation seem likely examples. Gaming and some business-productivity use cases also seem likely to benefit.


But tasks requiring real-time world knowledge, frontier-scale reasoning, or large-model access will remain cloud-based. Battery-life concerns might also push users toward remote solutions even when local processing is possible.


The sweet spot for AI PCs over the next few years might be privacy-sensitive, latency-critical, or frequently repeated tasks, which follow the same economics as any “local hardware versus remote service” tradeoff.
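That tradeoff can be sketched as a simple routing policy. The function below is a hypothetical illustration of the local/hybrid/cloud split, not any vendor's actual logic; the attribute names are assumptions:

```python
def route_task(needs_world_knowledge: bool, complex_reasoning: bool,
               privacy_sensitive: bool, latency_critical: bool) -> str:
    """Hypothetical policy: decide where an AI task should run."""
    # Live web access or frontier-scale reasoning keeps a task in the cloud.
    if needs_world_knowledge or complex_reasoning:
        return "cloud"
    # Private or real-time work is the local sweet spot.
    if privacy_sensitive or latency_critical:
        return "local"
    # Everything else can start local and escalate on demand.
    return "hybrid"

# Real-time translation: private and latency-critical
print(route_task(False, False, True, True))    # local
# Deep research: needs live web knowledge
print(route_task(True, True, False, False))    # cloud
# Everyday chat: simple, not sensitive
print(route_task(False, False, False, False))  # hybrid
```

The table that follows applies essentially this reasoning, case by case.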


| Use Case | Mode | Reason | Example Models/Apps |
| --- | --- | --- | --- |
| Live captions & transcription | ✅ Fully Local | Latency-critical; real-time audio can't tolerate cloud round-trips | Windows Live Captions, Whisper on-device |
| Real-time translation | ✅ Fully Local | Sub-20ms latency required; privacy-sensitive | Whisper, Seamless M4T |
| Writing autocomplete / suggestions | ✅ Fully Local | Keystroke-level latency; personal content stays private | Copilot+ on Windows, Apple Intelligence |
| Smart email summarization | ✅ Fully Local | Personal/sensitive data; short-form task suits SLMs | Apple Mail AI, Outlook Copilot (local tier) |
| Voice assistant (personal queries) | ✅ Fully Local | Privacy; always-on would be costly and slow over cloud | Siri (on-device), Google Gemini Nano |
| Photo organization & tagging | ✅ Fully Local | Private media; classification is well within SLM range | Apple Photos, Google Photos on-device |
| AI-assisted note-taking | ✅ Fully Local | Personal data; summarization is a strong SLM use case | Notion AI (local), Apple Notes |
| Offline coding assistant (completions) | ✅ Fully Local | Works without internet; latency-sensitive | Copilot local mode, Continue.dev + Ollama |
| Document Q&A (personal files) | ✅ Fully Local | Highly sensitive; RAG over local files suits 7B models | LlamaIndex + local model, Copilot+ |
| Background noise removal (calls) | ✅ Fully Local | Real-time signal processing; NPU-optimized | NVIDIA RTX Voice, Windows Studio Effects |
| Grammar/style checking | ✅ Fully Local | Short context, low complexity; SLMs excel | Grammarly on-device tier |
| General-purpose chat (everyday Q&A) | 🔀 Hybrid | Local handles simple queries; cloud escalates complex ones | Copilot+, Apple Intelligence with ChatGPT fallback |
| Coding assistant (complex tasks) | 🔀 Hybrid | Boilerplate → local; architecture/debugging → cloud | GitHub Copilot, Cursor |
| Document creation & long-form writing | 🔀 Hybrid | Drafting → local SLM; refinement/research → cloud | Microsoft 365 Copilot |
| Clinical note summarization | 🔀 Hybrid | On-prem inference with cloud model monitoring balances real-time latency and compliance oversight | Mistral 7B + cloud monitoring |
| Personal finance analysis | 🔀 Hybrid | Sensitive data processed locally; market data fetched from cloud | Custom RAG setups |
| Semantic file/photo search | 🔀 Hybrid | Indexing runs locally; fuzzy or cross-device search may use cloud | Windows Recall (when enabled), Spotlight AI |
| AI agents (personal tasks) | 🔀 Hybrid | Small open-weight models enable local tool use; complex multi-step planning still benefits from frontier access | Qwen-Agent, local MCP setups |
| Deep research & synthesis | ☁️ Fully Cloud | Requires broad world knowledge, long context (1M+ tokens), web access | Claude, GPT-5, Gemini |
| Complex reasoning / math | ☁️ Fully Cloud | Frontier reasoning models (chain-of-thought at scale) far exceed on-device capability | Claude Opus, o3, Gemini Deep Think |
| Multimodal generation (images/video) | ☁️ Fully Cloud | Diffusion models for quality image/video generation require massive VRAM | Midjourney, Sora, Gemini |
| Real-time web knowledge | ☁️ Fully Cloud | By definition requires live internet access and large retrieval systems | All search-augmented LLMs |
| Training / fine-tuning | ☁️ Fully Cloud | Requires sustained GPU clusters; not feasible on consumer hardware | AWS, Azure, GCP |
| Enterprise-scale RAG (large corpora) | ☁️ Fully Cloud | Document stores of millions of files require server infrastructure | Azure AI Search, Pinecone |
| High-stakes legal/medical decisions | ☁️ Fully Cloud | Requires frontier accuracy, audit trails, compliance guarantees | Enterprise Claude, GPT-5 |
| Multi-user / collaborative AI | ☁️ Fully Cloud | Shared context across users/devices requires centralized state | Teams Copilot, Google Workspace AI |
| Real-time fraud detection (banking) | ☁️ Fully Cloud | Needs global pattern recognition across millions of transactions | Cloud-hosted specialized models |


As you might expect, AI PCs will sometimes add a bit of cost (maybe in the $100 to $150 range), but arguably not a meaningful amount over a four-year to five-year useful device life. 

Netflix Revenue Levers Now Subscriptions, Ads (One-Time Breakup Fee is Significant)

A Netflix acquisition of Warner Brothers Discovery was thought to be a way for Netflix to begin broadening its revenue base in merchandisin...