Thursday, February 5, 2026

Will AI "Eat" Enterprise Software?

If you are an investor in enterprise software, you are aware of the fear that language models will disrupt the traditional enterprise software market and its firms. And that fear seems to be playing out in equity prices.


At one level there is concern that the traditional pricing model (per-seat licenses) will be challenged.


At another level there is concern that increasingly-capable AI models will displace the need for many enterprise software functions. 


Investors are essentially moving from views that “software eats the world” (so invest) to “software is dead” (so stay away). Near-term turbulence is inevitable. 


But it also is possible to argue that long term, there will be more enterprise software, even as AI adoption accelerates. 


More to the point, though language models enable natural language interfaces, automate routine tasks, and generate insights from vast datasets, they arguably cannot replace enterprise software. 


Enterprise systems are engineered for reliability, security, scalability, and regulatory compliance in high-stakes environments. Moreover, enterprises often deal with proprietary data silos, strict data privacy laws and mission-critical uptime that general-purpose models cannot easily replicate. 


| Aspect | Role of Enterprise Software | Role of General-Purpose Models | Coexistence Example |
| --- | --- | --- | --- |
| User Interface and Interaction | Provides structured dashboards, forms, and workflows for consistent, role-based access. | Enables natural language querying and conversational interfaces for ad hoc exploration. | Models integrated as chatbots within enterprise resource planning systems (querying inventory in plain English without navigating menus). |
| Data Management and Security | Handles secure storage, compliance (audit trails, encryption), and integration with legacy databases. | Analyzes unstructured data or generates summaries, but relies on external data feeds. | Enterprise tools feed sanitized data to models for insights while maintaining control over sensitive information (GDPR-compliant AI assistants in CRM). |
| Automation and Workflow | Executes rule-based, repeatable processes such as approvals or batch processing with high reliability. | Automates creative or variable tasks, such as generating custom reports or code. | Models suggest workflow optimizations within HCM platforms, but core execution remains in the enterprise system (auto-drafting performance reviews in Workday). |
| Analytics and Insights | Delivers predefined key performance indicators, business intelligence tools, and real-time dashboards with deterministic accuracy. | Provides probabilistic predictions, trend spotting, or scenario modeling from natural language prompts. | Hybrid BI where enterprise software runs core analytics and models add exploratory queries ("what if" simulations in financial planning tools). |
| Customization and Scalability | Supports enterprise-grade customization via APIs, modules, and cloud scaling for thousands of users. | Offers flexible, on-demand generation but struggles with consistent scaling or versioning. | Models generate custom code snippets for enterprise integrations, deployed within the platform (auto-building plugins for Salesforce). |
| Compliance and Auditing | Ensures regulatory adherence with built-in logging, versioning, and certification. | Lacks inherent auditability; outputs can be opaque or inconsistent. | Enterprise systems log model interactions as auditable events, using AI for efficiency while meeting standards (fraud detection in banking software). |
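
The coexistence pattern in the table can be sketched in a few lines. This is a hypothetical illustration, not any vendor's actual design: all class, function, and field names below are invented, and `model_suggest` is a stand-in for a real language-model call. The model only proposes a structured action; validation, execution, and audit logging stay on the enterprise side.

```python
# Illustrative sketch (all names hypothetical): a language model proposes,
# while the enterprise system validates, executes, and keeps the audit trail.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class EnterpriseOrderSystem:
    approval_limit: float = 10_000.0          # rule-based policy lives here
    audit_log: list = field(default_factory=list)

    def execute(self, action: dict, proposed_by: str) -> bool:
        # Deterministic validation: the model only *proposes*; the system decides.
        approved = (
            action.get("type") == "purchase_order"
            and 0 < action.get("amount", 0) <= self.approval_limit
        )
        # Every model interaction is recorded as an auditable event.
        self.audit_log.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "proposed_by": proposed_by,
            "action": action,
            "approved": approved,
        })
        return approved

def model_suggest(prompt: str) -> dict:
    # Stand-in for a language-model call that turns natural language into a
    # structured action (in practice: an LLM API plus schema validation).
    return {"type": "purchase_order", "amount": 2_500.0, "note": prompt}

system = EnterpriseOrderSystem()
ok = system.execute(model_suggest("reorder 50 units of part A-113"), "llm-assistant")
print(ok, len(system.audit_log))   # prints: True 1
```

The design point is the division of labor: the probabilistic component suggests, while the deterministic system enforces policy and preserves the audit trail.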


It’s a bit analogous to the traditional choices between general-purpose and application-specific processing. Sometimes one makes more sense than the other, but both coexist. 


AI-enabled or AI-centric software is moving up the stack of what a product is: consumer experiences of products now include vastly more software content than in prior years.


Sometimes a general-purpose approach will suffice, but not always. Application-specific integrated circuits (ASICs) still make sense as well. 


And AI will often allow software to become more capable, rather than replacing it, which is the common concern today. 


Domain experience, codified in enterprise software, arguably will be just as important tomorrow as it is today. 


But investors, at the moment, seem more focused on the near-term negative impact on enterprise software company fortunes. 


In some cases, that concern is exacerbated by huge new capital spending requirements for AI infrastructure. 


An adage suggests "markets can stay irrational longer than you can stay solvent." And that is the reality some investors might be facing in the near term.


AI is Like Writing, the Printing Press, Paper, Communications, Computing, the Internet, Smartphones, Social Media and Search

Artificial intelligence is the latest in a long pattern of improvements in knowledge technology that began with permanence (writing), added scale (printing) and speed (telegraph/internet), then interactivity (social media), and now knowledge creation and understanding (AI). 


All these technologies fundamentally transformed how knowledge is created, stored, and distributed.


Writing meant knowledge could be transmitted across generations, with more permanency. The invention of paper reduced the cost of recording knowledge.


The printing press democratized knowledge by making books affordable and abundant. The telegraph enabled faster long distance sharing of information. The telephone did the same for voice communications. 


Radio and television added richer experiences. The personal computer democratized content creation. 


The internet (1990s) went further, enabling instant global information sharing and two-way communication. Social media democratized content creation; search removed information barriers; smartphones made knowledge retrieval ambient. 


AI now promises another leap: not just distributing existing knowledge, but helping generate, synthesize, and personalize it at scale. It's shifting from "access to information" to "access to reasoning and content creation."


| Technology | Approximate Era | Impact on Knowledge Dissemination | Key Transformation |
| --- | --- | --- | --- |
| Writing Systems | 3200 BCE onwards | Enabled knowledge to persist beyond human memory and oral tradition | From ephemeral to permanent knowledge |
| Paper | 100 CE (China), 1100s (Europe) | Made writing materials cheap and portable compared to papyrus/parchment | Reduced cost of recording knowledge |
| Printing Press | 1440s | Mass production of identical texts; standardization of knowledge | From scarce to abundant information |
| Telegraph | 1830s-1840s | First technology to separate communication from physical transport | Real-time long-distance knowledge transfer |
| Telephone | 1870s-1880s | Enabled direct voice communication across distances | Democratized real-time conversation |
| Radio | 1920s (broadcast era) | One-to-many mass communication without literacy requirement | Audio knowledge broadcasting |
| Television | 1950s (mass adoption) | Added visual dimension to mass communication | Visual learning and shared cultural experiences |
| Personal Computer | 1970s-1980s | Put information processing power in individual hands | Democratized content creation and computation |
| Internet/World Wide Web | 1990s | Global, instant, networked information sharing | From centralized to distributed knowledge |
| Search Engines | Late 1990s-2000s | Made vast internet information discoverable and accessible | From information access to information retrieval |
| Social Media | 2000s | Enabled mass peer-to-peer knowledge sharing and collective intelligence | From consumption to participation |
| Smartphones | 2007 onwards | Made internet access ubiquitous and mobile | Always-available knowledge in pocket |
| AI/LLMs | 2020s | Automated knowledge synthesis, translation, and personalized explanation | From information access to reasoning assistance |


We might argue that “AI is like the printing press” in terms of its ability to enable widespread and cheaper access to knowledge. But AI is also like other innovations that have enabled multi-generational knowledge permanence, speed of retrieval, cost of retrieval, and the ability to create.


Wednesday, February 4, 2026

AI is the Solow Paradox at Work

An analysis of 4,500 work-related artificial intelligence use cases suggests we are only in the very early stages of applying AI at work, and that most use cases have not yet reached a stage where we can measure return on investment or productivity impact.


That is worth keeping in mind. 


Most use cases so far affect only speed or time savings. Few are directly integrated into customer-facing, revenue-generating activities. 


The vast majority of use cases are very basic, says a Section AI report. Some 14 percent of workers say their most valuable AI use case is Google search replacement. As helpful as that might be, it is hard to measure productivity gains at this point. 


source: Section AI


About 17 percent of workers use AI for drafting, editing, and summarizing documents. Again, productivity improvements are difficult to measure in those cases, though perhaps more tractable in terms of time savings. 


So far, Section AI researchers found only two percent of users have built automations for copy generation, which would save more time, for example. 


About three percent say their most valuable use case is data analysis or code generation, and there the ROI seems easiest to document in terms of time saved or effort avoided, rather than other revenue-generating metrics. 


source: Section AI


In fact, nearly a quarter of respondents say AI does not save them any time at all. That might seem odd, unless those users are spending time learning how to use AI, which would indeed consume more time than it saves at first. 


In other cases, they might find they are having to spend time checking the answers and output, which again might take additional time. 


The point is that we are in early stages of deployment, where it remains difficult to assess productivity gains. 


source: Section AI


As unhelpful as it might be, transformative technologies often fail to show up in productivity statistics for years, or even decades, after their introduction, as the Solow Paradox describes. 


Measuring language model impact by "minutes saved per task" captures only the shallowest layer of value, many would argue. The reason is that what we can measure sometimes is not all that important. 


Productivity metrics are generally designed to measure output per hour (quantity). They are notoriously bad at measuring quality. 


If a model helps a software engineer write safer, more robust code, or helps a marketer generate a campaign that resonates better with customers, standard productivity metrics might show zero gain (or even a loss).


Also, in the early stages of adoption, productivity often dips, since firms and workers must invest time and capital in training, restructuring workflows, and figuring out how to use the new tools. 


This "intangible capital" investment does not produce immediate revenue.
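
That early-stage dip can be shown with simple arithmetic. The figures below are invented purely for illustration: when measured output per hour includes training and output-checking overhead, a genuinely useful tool can still register as a productivity loss at first.

```python
# Purely illustrative arithmetic (all figures invented): why measured
# productivity can dip during early adoption even when the tool helps.
# A worker produces 40 "units" in 10 task-hours; AI cuts task time 20 percent,
# but the first weeks include training and output-checking overhead.
def measured_productivity(units: float, task_hours: float, overhead_hours: float) -> float:
    # Standard metrics divide output by *all* hours worked, overhead included.
    return units / (task_hours + overhead_hours)

baseline = measured_productivity(40, 10.0, 0.0)   # 4.0 units/hour, no AI
early    = measured_productivity(40, 8.0, 4.0)    # training + checking: dips to ~3.3
later    = measured_productivity(40, 8.0, 0.5)    # overhead fades: rises to ~4.7
print(round(baseline, 2), round(early, 2), round(later, 2))   # prints: 4.0 3.33 4.71
```

The measured number falls before it rises, even though the underlying capability improved on day one, which is the J-curve pattern the Solow Paradox literature describes.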


Also, most adopters so far are using language models to do existing tasks faster (writing emails, for instance). True productivity explosions occur only when businesses re-architect their entire workflows to do things that were previously impossible, rather than just speeding up legacy processes.


| Innovation | Initial Era | The "Lag Phase" | Primary Reason for Lag | When Productivity Finally Spiked |
| --- | --- | --- | --- | --- |
| Electric Power | Late 1880s (electric motor introduced) | ~30–40 years | Factory owners swapped steam engines for electric motors without changing factory layouts. | 1920s: when "unit drive" systems allowed for the assembly line and decentralized manufacturing. |
| Computers (IT) | 1970s–80s (mainframes and PCs) | ~15–20 years | The "Solow Paradox": computers were used for isolated tasks (word processing) rather than networked data flow. | Mid-1990s: when the internet and enterprise software (ERP) enabled supply chain integration and instant communication. |
| The Internet | Early 1990s (World Wide Web) | ~10–15 years | The dot-com bubble: investment rushed in, but business models (e-commerce, cloud) were immature. | Late 2000s/2010s: when mobile internet, cloud computing, and smartphone adoption created the app economy. |
| Generative AI (language models) | 2022–present (ChatGPT moment) | Ongoing | Current focus is on "task replacement" (writing, coding) rather than "workflow redesign" (autonomous agents, new R&D methods). | Prediction (2027–2030+): likely when AI moves from a "copilot" (assistant) to an "agent" that can autonomously execute complex, multi-step workflows. |


That sort of measurable productivity gain cannot be demonstrated so soon. 

