Monday, March 2, 2026

Even if 70% of AI Projects Fail, AI Will Not

AI projects often fail not because the technology doesn’t work, but because the incentives inside the institution are misaligned.


And yet, despite all that, failure might not even matter. Most information technology initiatives (by some estimates, up to 70 percent) seem to fail, and none of that has diminished the ultimate value of information technology.


And many institutional behaviors might help explain risk aversion generally, not only IT project failure.


When risk is localized and visible while benefits are diffuse and hard to attribute, rational managers avoid experimentation even when the enterprise would benefit.


If an AI pilot fails, the sponsoring department (IT, operations, marketing) is clearly identifiable.


If it succeeds, gains often show up in ways that do not proportionally reward taking the risk:

  • company-wide cost reductions

  • improved customer experience

  • better decision quality

  • long-term competitiveness


Also, what is hard to measure is hard to reward, and failure (or the size of the investment) arguably is easy to measure.


Successes, on the other hand, often are ambiguous or show benefit only at the “whole enterprise” level:

  • “better decisions”

  • “higher productivity”

  • “reduced churn risk.”


Also, employee and manager rewards tend to be short-term oriented, while AI improvements tend to pay off over the long term.


| Innovation | Who Bears the Risk | Who Receives the Benefit | How Rewards Are Structured | Likely Behavior |
| --- | --- | --- | --- | --- |
| AI customer service chatbot | Customer support leadership | Entire company (lower costs, better customer experience) | Support measured on call resolution, customer satisfaction | Avoid deployment if early errors hurt scores |
| Predictive maintenance AI | Plant operations | Enterprise (reduced downtime, capex deferral) | Ops measured on uptime, safety | Resist pilot that might trigger false alarms |
| AI sales forecasting | Sales ops team | Finance, supply chain, executive planning | Sales measured on quota attainment | Ignore tool that might challenge forecasts |
| AI-assisted coding tools | IT budget owner | All product teams | IT measured on cost control and system stability | Delay rollout due to security concerns |
| AI fraud detection | Risk/compliance team | Entire firm (loss reduction) | Compliance measured on zero failures | Avoid model with false-positives risk |
| AI hiring screening | HR | All departments (better hires) | HR measured on time-to-fill and legal risk | Reject tool due to bias concerns |
| AI pricing optimization | Pricing team | Company-wide margin gains | Team measured on revenue stability | Avoid experimentation that could reduce short-term revenue |
| AI knowledge management | IT, knowledge management team | Entire workforce | KM measured on system uptime and adoption | Avoid radical change to workflows |


The contrast between “fear” and “greed” explains quite a lot of human behavior, in personal life or in business. We might say the dichotomy is between expectations about “downside” and “upside” as well as the consequences of either happening. 


If the impact of a negative realized outcome arrives right away and is quantifiable, while the results of a positive outcome are delayed, intangible in the moment, or hard to quantify, then perceived risks will exceed perceived rewards in many instances.


All of which might help explain why artificial intelligence adoption seems to produce fewer immediate gains than we might expect. 


For individual decision-makers in business, downside risk from a model-driven decision, such as credit losses, customer complaints, or operational disruptions, is immediate and personally attributable. We know quickly what we have lost. 


Upside value, by contrast, is probabilistic, delayed, and diffused across the enterprise.


Rational actors therefore discount model recommendations. 


To increase the perceived margin of safety, adoption thresholds rise, discretionary review layers proliferate, and precedent asserts itself when change is called for. 


This behavior is often misinterpreted as cultural resistance or distrust of AI, but it reflects rational responses to incentive structures that punish mistakes more visibly and directly than they reward successes.
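The incentive asymmetry can be sketched as a toy expected-value calculation. Every number below (probabilities, dollar amounts, credit share, discount rate) is an illustrative assumption, not data from any study; the point is only that a project with positive enterprise value can look negative to the manager who personally absorbs the downside.

```python
# Toy model: why a rational manager may reject a positive-value AI pilot.
# All figures are illustrative assumptions.

def attributed_value(p_fail, loss, p_success, gain,
                     credit_share, discount_rate, years_to_gain):
    """Expected value as seen by the sponsoring manager.

    Losses are immediate and fully attributed to the manager;
    gains are delayed, discounted, and only partially credited.
    """
    expected_loss = p_fail * loss  # visible, personal, happens now
    discounted_gain = (p_success * gain * credit_share
                       / (1 + discount_rate) ** years_to_gain)
    return discounted_gain - expected_loss

def enterprise_value(p_fail, loss, p_success, gain,
                     discount_rate, years_to_gain):
    """Same project from the firm's view: full gain, no attribution gap."""
    return (p_success * gain / (1 + discount_rate) ** years_to_gain
            - p_fail * loss)

# Illustrative pilot: 70% chance of failure costing $1M, 30% chance of a
# $10M enterprise gain arriving in 3 years; manager credited with 10%.
mgr = attributed_value(p_fail=0.7, loss=1_000_000,
                       p_success=0.3, gain=10_000_000,
                       credit_share=0.1, discount_rate=0.1, years_to_gain=3)
ent = enterprise_value(p_fail=0.7, loss=1_000_000,
                       p_success=0.3, gain=10_000_000,
                       discount_rate=0.1, years_to_gain=3)

print(f"Manager's expected value:    {mgr:,.0f}")   # negative: rational to avoid
print(f"Enterprise's expected value: {ent:,.0f}")   # positive: firm would benefit
```

The same project is simultaneously a bad bet for the sponsor and a good bet for the firm, which is the whole misalignment in miniature.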


In fact, where AI can better quantify risk, it might actually encourage inaction, especially if it can surface the specific entities, departments, or functions where losses could occur, while benefits accrue more broadly and are harder to identify.


This is analogous to the problem of cutting any government spending: the losers are easily identifiable (people who lose jobs) while the winners (taxpayers) are diffuse and anonymous. 


So disappointing AI return-on-investment results are the predictable outcome of deploying probabilistic technologies into institutions optimized for deterministic control, loss avoidance, and fragmented accountability.


In other words, AI might create, highlight, or accentuate uncertainty, allocating potential losses in clear ways while producing benefits that are more diffuse.


An example is AI quantifying costs in specific departments, run by specific individuals, while producing enterprise-level profits or revenues that are not clearly attributable to any one department or person.


The risks of failure are clear and personal; the benefits are not attributable in the same way.


So to increase the chances that AI adoption will work, entities can change incentives:

  • Share credit across departments

  • Tie success metrics to enterprise outcomes, not just local KPIs

  • Create “safe-to-fail” zones (innovation budgets and sandboxes reduce career risk)

  • Centralize funding for cross-functional AI to avoid penalizing the initiating department

  • Reward experimentation, not just outcomes

  • Attribute value explicitly (use internal accounting to show which teams enabled enterprise gains)

Yawning about 6G?

Most people outside the communications industry will be unaffected by, and largely unconcerned about, Mobile World Congress. And even many in the computing and software industries might not need to pay much attention.


Yes, every next-generation mobile platform has featured higher capacity (speeds, bandwidth) and lower latency. Sometimes that really makes a difference, enabling new and compelling features. Text messaging doesn’t take lots of bandwidth, but it arguably was a “killer app” for 2G. Streaming video enabled by 4G networks might be in the same category.


Most observers might agree that the proposed 3G apps actually did not emerge until the time of 4G. But by most assessments, no new killer features or apps have emerged since 4G.


We might note that the hoped-for advances often arrive only in every other generation of networks, as the hyped apps for any particular generation take longer to commercialize than expected.


That has led to some thinking that “every other generation” of mobile platforms is consequential (in terms of killer features and apps). So 2G and 4G were more consequential, 3G and 5G less so, with perhaps some expectation that 6G could be the platform that is more important than 5G. 


| Platform | Theoretical Peak | Real-World Speed | Latency | The Pitch | What Actually Mattered | Verdict |
| --- | --- | --- | --- | --- | --- | --- |
| 2G | 0.3 Mbps (EDGE: 384 Kbps) | 0.05–0.1 Mbps | 300–1000 ms | "Digital" wireless; wireless internet on your phone | SMS texting — arguably the most transformative app in mobile history. Basic WAP browsing (barely usable). MMS. | Genuine Leap |
| 3G | 7.2–21 Mbps (HSPA+) | 0.5–3 Mbps | 100–500 ms | "Mobile broadband" — internet everywhere, video calling | App stores became viable. Google Maps (basic). Email on the go. Social media feeds (early Twitter/Facebook). The original iPhone ran on 2.5G/3G. | Partial Win |
| 4G LTE | 150–1000 Mbps | 10–50 Mbps | 30–70 ms | "True broadband speeds" — replace home internet, HD video everywhere | Streaming video (Netflix, YouTube) became genuinely good. Uber/rideshare apps. Instagram, Snapchat, TikTok. Video calls (FaceTime, Zoom). Hotspot as home broadband backup. | Genuine Leap |
| 5G (sub-6 GHz) | 1–10 Gbps | 50–300 Mbps | 10–30 ms | "Connected everything" — AR, autonomous vehicles, smart cities, remote surgery | Faster hotspot. Marginally better congestion in stadiums/airports. No killer app has emerged for consumers after 5+ years. | Mostly Hype |
| 5G mmWave | 4–20 Gbps | 1–3 Gbps (indoors: near zero) | 1–5 ms | "Gigabit wireless" — fixed wireless broadband replacement | Fixed wireless home broadband in specific markets. Dense venues. Not useful for mobile users — signal doesn't penetrate walls or travel more than a few hundred feet. | Mostly Hype |
| 6G (proposal) | 1 Tbps | ??? | <1 ms (theoretical) | "Holographic communication," digital twins, connected senses, brain-machine interfaces | Unknown. Researchers candidly admit there is no identified 6G killer app. | Mostly Hype |


But we might also be at a point where speeds and feeds simply matter less, as the value of the mobile access connection is less driven by bandwidth and latency, and more by device and app capabilities. 


Perhaps it always is true that value is driven by “what the platform enables” rather than the “bandwidth” the platform supports. That has been true, arguably, for decades, as broadband internet access has gotten better.


But we might also be at a point where, generally speaking, the networks support “more than enough” bandwidth, and “better than required” latency for most useful consumer or business use cases.
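The “more than enough” claim can be checked with back-of-the-envelope arithmetic. The per-application bandwidth figures below are common rule-of-thumb estimates, not measurements, and the network figures are conservative low-end real-world downlink speeds:

```python
# Rough check: do real-world mobile speeds already exceed what common
# consumer applications need? All figures are rule-of-thumb estimates.

APP_NEEDS_MBPS = {
    "Web browsing": 5,
    "HD video call": 3,
    "HD streaming (1080p)": 8,
    "4K streaming": 25,
    "Cloud gaming": 35,
}

# Conservative low-end real-world downlink speeds.
NETWORK_MBPS = {"4G LTE": 10, "5G sub-6 GHz": 50}

for net, speed in NETWORK_MBPS.items():
    satisfied = [app for app, need in APP_NEEDS_MBPS.items() if need <= speed]
    print(f"{net} ({speed} Mbps low end): "
          f"supports {len(satisfied)}/{len(APP_NEEDS_MBPS)} common apps")
```

Even at the low end, 5G comfortably covers every application on this illustrative list, which is one way of stating why additional raw speed from 6G may not, by itself, enable anything new.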

