With Big Data, decisions made by machine learning algorithms depend on training data
generated by many individuals. In the ethical domain, how does this feature affect the
prosociality of the decisions that train the AI? In an experiment in which we
manipulated the pivotality of individual human decisions used to train an artificially
intelligent algorithm, we show that the diffusion of responsibility weakened revealed
social preferences and led to the design of algorithmic models that favor selfish decisions.
This effect does not result from a change in the structure of incentives, and it is independent
of the existence of externalities. Rather, our results show that Big Data offers an
excuse for selfish behavior by reducing responsibility for one’s own and others’ fates.