It is reasonable to suggest that, at the moment, agentic artificial intelligence is not ready to displace many human jobs outright. Expectations are higher (or more worrisome, depending on one’s point of view) for artificial general intelligence.
The truly far-reaching implications, though, would come if artificial general intelligence does acquire such capabilities. Hard as it may be to imagine a world where nearly all essential work can be done by the “compute,” the economic ramifications would be stunning and unprecedented.
“Before AGI, human skill was the main driver of output, and wages reflected the scarcity of skills needed for bottleneck tasks,” says Pascual Restrepo, author of the paper “We Won’t Be Missed: Work and Growth in the AGI World,” published by the National Bureau of Economic Research.
Consider the potential impact on jobs, wages and sources of value. “In an AGI world, compute takes that central role, and wages are anchored to the computing cost of replicating human skill,” he argues. “While human wages remain positive, and on average exceed those in the pre-AGI world, their value becomes decoupled from GDP, the labor share converges to zero, and most income eventually accrues to compute.”
There are some caveats.
AGI assumes we can replicate what people do if we throw enough compute at the tasks. That does not mean it is practical or efficient to automate everything.
Depending on the computing cost α(ω) of performing a given task with AI, it may be better to leave some tasks to humans and allocate our finite computational resources elsewhere.
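To make that allocation logic concrete, here is a minimal sketch. It is not Restrepo’s model; the task names, wages, compute requirements and prices are invented assumptions. The idea is simply that with a finite compute budget, it pays to automate the tasks where a unit of compute displaces the most human cost, and to leave the rest to people.

```python
# Illustrative sketch of allocating scarce compute across tasks.
# Assumed inputs (all placeholder numbers, not from the paper):
#   alpha  = compute units needed to replicate one worker on a task
#   wage   = annual human cost of that task
#   compute_price  = annual cost of one compute unit
#   compute_budget = total compute available

def allocate_compute(tasks, compute_price, compute_budget):
    """Automate tasks in order of human cost displaced per compute unit."""
    ranked = sorted(tasks, key=lambda t: t["wage"] / t["alpha"], reverse=True)
    plan, remaining = [], compute_budget
    for t in ranked:
        automation_cost = t["alpha"] * compute_price
        # Automate only if it beats the human cost AND compute remains.
        if automation_cost < t["wage"] and t["alpha"] <= remaining:
            plan.append((t["name"], "automate"))
            remaining -= t["alpha"]
        else:
            plan.append((t["name"], "human"))
    return plan

tasks = [
    {"name": "data entry",      "alpha": 1.0,  "wage": 40_000},
    {"name": "radiology read",  "alpha": 5.0,  "wage": 300_000},
    {"name": "bedside therapy", "alpha": 50.0, "wage": 90_000},
]

print(allocate_compute(tasks, compute_price=20_000, compute_budget=10.0))
# With these made-up numbers, the compute-hungry, lower-wage task stays human.
```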
Also, some work requires interacting physically with the world. AGI optimists assume that, when needed, and if economically rational, computer systems can control machines and hardware to accomplish this work.
Some work requires empathy and social interaction and, it is argued, must be carried out by humans. The “human touch” and “empathy” of a therapist or healthcare provider may be impossible to replicate, creating a premium for work completed by people.
The issue is whether so much compute can be substituted that the real choice is between a human and an AI system that “perfectly emulates the best therapists in the world (from a functional point of view).”
Assuming we can afford to do so, one might rationally argue there are some, or many, instances where the AI is an acceptable substitute.
One must also assume that compute capabilities keep improving, and compute costs keep falling, over time at something like the Moore’s Law rate.
All that noted, we might still argue that even if some work can be automated, it might not be. There will of course be a cost for using AGI. And if the costs are significant enough, and the tasks being considered for AI substitution can be handled by humans at equivalent or lower cost, then using AGI will not make sense.
Hospitality, live performance or entertainment might provide examples.
Also, AGI compute might be a scarce resource. If so, then normal cost-benefit logic should hold: AGI replaces human labor when it makes economic sense to do so.
A new theory of value might include the idea that human labor is worth what it saves in compute costs, Restrepo suggests. But algorithmic progress, which is arguably less predictable than the buildout of compute infrastructure, complicates that valuation: uncertainty about how quickly algorithms improve introduces volatility.
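A stylized bit of arithmetic may help illustrate the decoupling Restrepo describes. In the sketch below (invented numbers, not the paper’s model), the wage is anchored to the compute cost of replicating human skill and held roughly flat, while output grows with the expanding compute base. Wages stay positive, but the wage bill becomes a vanishing slice of output, so the labor share heads toward zero and the remaining income accrues to compute.

```python
# Stylized arithmetic for the wage/GDP decoupling described above.
# Every number here is an invented assumption for illustration only.

workers = 100e6     # human workers, assumed constant
wage = 60_000       # assumed wage anchor, held flat for simplicity
gdp_0 = 25e12       # starting output, roughly pre-AGI scale
growth = 0.20       # assumed annual output growth once AGI-driven compute scales

for year in (0, 10, 20, 30):
    gdp = gdp_0 * (1 + growth) ** year
    labor_share = workers * wage / gdp
    print(f"year {year:2d}: GDP ≈ ${gdp/1e12:,.1f}T, labor share ≈ {labor_share:.1%}")
```

The specific growth rate and wage level are arbitrary; the point is only that a roughly flat wage bill set against compounding compute-driven output pushes the labor share toward zero even while individual wages remain positive.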
The social implications are huge. In an AGI economy, most income accrues to owners of compute. How society manages such a transition, in terms of impact on social inequality, is unclear.
As Restrepo says, “today, if half of us stopped working, the economy would collapse.” That might not be true in a future where AGI can be economically deployed to displace humans in economically central roles.
All of which raises new issues around “abundance” that humans have not generally had to deal with in the past: what do people do when they do not actually need to work?