Wednesday, August 26, 2020
The Allocation of Decision Authority to Human and Artificial Intelligence
The allocation of decision authority by a principal to either a human agent or an artificial intelligence is examined. The principal trades off an AI's better-aligned choices against the need to motivate the human agent to expend effort in learning choice payoffs. It is shown that, when agent effort is desired, the principal is more likely to give that agent decision authority, to reduce investment in AI reliability, and to adopt an AI that may itself be biased. Organizational design considerations are therefore likely to shape how AIs are trained.
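The trade-off described above can be sketched in a stylized principal-agent form. This is only an illustration under assumed functional forms; none of the symbols below come from the paper itself.

```latex
% Stylized sketch (illustrative notation, not the paper's model).
% The agent learns choice payoffs only if given decision authority.
Let $e \in \{0,1\}$ denote the human agent's effort in learning choice
payoffs, $v$ the value of an informed choice, and $\beta_H > \beta_{AI}$
the misalignment costs of the human and the AI, respectively. The AI is
reliable with probability $\rho \in [0,1]$. Delegating to the human (who
then exerts effort, $e = 1$) yields the principal
\[
  U_H = v - \beta_H ,
\]
while delegating to the AI yields
\[
  U_{AI} = \rho\, v - \beta_{AI} .
\]
The principal prefers human authority whenever
\[
  v - \beta_H \;\ge\; \rho\, v - \beta_{AI},
\]
i.e., when the value of motivated agent effort outweighs the AI's
alignment advantage. In this stylized form, lowering investment in
reliability $\rho$ (or tolerating a larger $\beta_{AI}$) makes human
authority, and hence agent effort, easier to sustain.
```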