Monday, March 2, 2026

Even if 70% of AI Projects Fail, AI Will Not

AI projects often fail not because the technology doesn’t work, but because the incentives inside the institution are misaligned.


And yet, despite all that, failure might not even matter. Most information technology initiatives (by some estimates, up to 70 percent) seem to fail, and none of that diminishes the technology's ultimate value.


And there are plenty of institutional dynamics that might help explain risk aversion, not only IT project failure.


When risk is localized and visible while benefits are diffuse and hard to attribute, rational managers avoid experimentation even when the enterprise would benefit.


If an AI pilot fails, the sponsoring department (IT, operations, marketing) is clearly identifiable.


If it succeeds, gains often show up in ways that do not proportionally reward taking the risk:

  • company-wide cost reductions

  • improved customer experience

  • better decision quality

  • long-term competitiveness. 


Also, what is hard to measure is hard to reward. And failure, or the cost of an investment, arguably is often quite measurable.


Successes, on the other hand, often are ambiguous or show benefit only at the “whole enterprise” level:

  • “better decisions”

  • “higher productivity”

  • “reduced churn risk.”


Also, employee and manager rewards tend to be short-term oriented, while AI improvements tend to accrue over the long term.


Innovation | Who Bears the Risk | Who Receives the Benefit | How Rewards Are Structured | Likely Behavior
AI customer service chatbot | Customer support leadership | Entire company (lower costs, better customer experience) | Support measured on call resolution, customer satisfaction | Avoid deployment if early errors hurt scores
Predictive maintenance AI | Plant operations | Enterprise (reduced downtime, capex deferral) | Ops measured on uptime, safety | Resist pilot that might trigger false alarms
AI sales forecasting | Sales ops team | Finance, supply chain, executive planning | Sales measured on quota attainment | Ignore tool that might challenge forecasts
AI-assisted coding tools | IT budget owner | All product teams | IT measured on cost control & system stability | Delay rollout due to security concerns
AI fraud detection | Risk/compliance team | Entire firm (loss reduction) | Compliance measured on zero failures | Avoid model with false positives risk
AI hiring screening | HR | All departments (better hires) | HR measured on time-to-fill and legal risk | Reject tool due to bias concerns
AI pricing optimization | Pricing team | Company-wide margin gains | Team measured on revenue stability | Avoid experimentation that could reduce short-term revenue
AI knowledge management | IT, knowledge management team | Entire workforce | KM measured on system uptime and adoption | Avoid radical change to workflows


The contrast between “fear” and “greed” explains quite a lot of human behavior, in personal life or in business. We might say the dichotomy is between expectations about “downside” and “upside,” and about the consequences of either actually happening.


If the consequences of a negative outcome arrive right away and are quantifiable, while the results of a positive outcome are delayed, intangible in the moment, or hard to quantify, then perceived risks will often outweigh perceived rewards.


All of which might help explain why artificial intelligence adoption seems to produce fewer immediate gains than we might expect. 


For individual decision-makers in business, downside risk from a model-driven decision, such as credit losses, customer complaints, or operational disruptions, is immediate and personally attributable. We know quickly what we have lost. 


Upside value, by contrast, is probabilistic, delayed, and diffused across the enterprise.


Rational actors therefore discount model recommendations. 
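
A toy expected-value calculation makes that discounting concrete. The sketch below (in Python; the probabilities, dollar figures, credit share, and discount rate are purely illustrative assumptions, not data from any study) shows how a project that is clearly positive for the enterprise can still be a rational “no” for the sponsoring manager:

# Toy sketch of the incentive asymmetry described above.
# All figures are illustrative assumptions, not measured data.

def manager_expected_payoff(p_failure, personal_loss_if_failure,
                            enterprise_gain_if_success,
                            share_credited_to_manager,
                            delay_years, annual_discount_rate):
    """Expected payoff as seen by the sponsoring manager: losses are
    immediate and fully attributed; gains are delayed, diffuse, and
    only partly credited back to the sponsor."""
    expected_loss = p_failure * personal_loss_if_failure
    discount = 1 / (1 + annual_discount_rate) ** delay_years
    expected_gain = ((1 - p_failure) * enterprise_gain_if_success
                     * share_credited_to_manager * discount)
    return expected_gain - expected_loss

# Enterprise view: even a 70% failure rate is worth funding
# if the occasional success is large enough.
enterprise_ev = 0.3 * 10_000_000 - 0.7 * 500_000        # about +2,650,000

# Manager view of the same project: only 5% of the gain is credited
# back, the gain arrives three years later, and the loss lands now.
manager_ev = manager_expected_payoff(
    p_failure=0.7,
    personal_loss_if_failure=500_000,
    enterprise_gain_if_success=10_000_000,
    share_credited_to_manager=0.05,
    delay_years=3,
    annual_discount_rate=0.10,
)

print(f"Enterprise expected value: {enterprise_ev:,.0f}")  # positive
print(f"Manager expected value:    {manager_ev:,.0f}")     # negative

On those assumed numbers, the enterprise expects to gain roughly 2.65 million while the sponsoring manager faces an expected personal deficit of roughly 237,000, which is exactly the discounting behavior described above.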


To increase the perceived margin of safety, adoption thresholds rise, discretionary review layers proliferate, and precedent asserts itself when change is called for. 


This behavior is often misinterpreted as cultural resistance or distrust of AI, but it reflects rational responses to incentive structures that punish mistakes more visibly and directly than they reward successes.


In fact, where AI is able to quantify risk more precisely, it might actually encourage inaction, especially if it can surface the specific entities, departments, or functions where losses could occur, while the benefits accrue more broadly and are harder to identify.


This is analogous to the problem of cutting any government spending: the losers are easily identifiable (people who lose jobs) while the winners (taxpayers) are diffuse and anonymous. 


So modest AI return-on-investment results are the predictable outcome of deploying probabilistic technologies into institutions optimized for deterministic control, loss avoidance, and fragmented accountability.


In other words, AI might create, highlight, or accentuate a greater degree of uncertainty, allocating potential losses in clear ways while producing benefits that are more diffuse.


One example is AI quantifying costs in specific departments, run by specific individuals, while producing enterprise-level profits or revenues that are not clearly attributable to any specific department or person.


The risks of failure are clear and personal; the benefits are not attributable in the same way.


So to increase the chances that AI adoption will work, entities can change incentives:

  • Share credit across departments

  • Tie success metrics to enterprise outcomes, not just local KPIs

  • Create “safe-to-fail” zones (innovation budgets and sandboxes reduce career risk)

  • Centralize funding for cross-functional AI to avoid penalizing the initiating department

  • Reward experimentation, not just outcomes

  • Attribute value explicitly (use internal accounting to show which teams enabled enterprise gains)
