Sunday, May 17, 2026

Ethical AI is Very Complicated

There are signs of anxiety about artificial intelligence that are well grounded but also somewhat "Luddite." AI concerns include a legitimate focus on job markets, fairness, and safety.


Job automation, economic inequality, bias, privacy, deepfakes, loss of human agency, concentration of power and longer-term potential risks are rational concerns.


On the other hand, there is also some admixture of resistance or skepticism toward a new technology that might be shaped, but hardly seems possible to "stop."


The signs are obvious:


What separates the "good" use of any technology and the "evil" use of that same technology? 


The simple answer is that most technology is morally inert; human intention is what determines its impact.


But there is a sense in which “intention alone” is insufficient:

  • Consequences matter independently of intent, which is why we have product liability laws

  • At least some technologies are not entirely “neutral”

    • landmines

    • social media algorithms optimized for engagement

  • Negligence (moral responsibility also extends to "what you should have foreseen")

  • Externalities (climate, opioid addiction)

  • "Dual-use" potential (encryption; gain-of-function research)


So "intention" is a "necessary but not sufficient" criterion for evaluating ethical implications. A fuller account could include:

  • Design (What uses does the technology structurally enable or constrain?)

  • Foreseeability (What harms were predictable?)

  • Who benefits and who bears the risks?

  • Systemic effects at scale.


So “intention” is the most important single factor in moral evaluation, but design, “affordances” (any property or feature of an object or environment that suggests and enables a specific action) and systemic effects generate moral responsibilities that exist independently of what anyone "meant."


Intent matters. But so do consequences. The issue is how to create protections without weaponizing them (over-regulating, stifling innovation, or creating undue product liability).

