Moral (Mis)Behavior in the Era of Big Data

Project Start: 06/2020
Status: Ongoing
Researchers: Victor Klockmann, Marie-Claire Villeval, Alicia von Schenk
Category: Household Finance, Experiment Center
Funded by: SAFE

"Especially in times of Big Data and Machine Learning, the behavior and the way algorithms decide depends on training data generated by a large number of people. In an online experiment (via oTree and Zoom instead of a planned laboratory experiment) we want to identify the effect of being responsible for the decisions of an algorithm but ""not decisive"" on moral behavior. In finance, there are funds that make investments and portfolios that are constructed using Big Data. If one's decisions are only one source out of many for these, the inhibitions to bahave immoraly might be low. Do individuals actively develop moral strategies to avoid feeling responsible for their misbehavior and to minimize negative external effects via machine learning (BĂ©nabou et al., No. w24798. NBER, 2018)?

If one's own decisions generate training data for an AI, how does the morality reflected in these data change when the pivotal role of one's own behavior for the AI's decisions changes? How do people change their decisions when they train an artificially intelligent system not for themselves but for a third party? Does Big Data offer "moral wiggle room" to behave selfishly?

The treatments vary the pivotality of a participant's decisions (in a dictator game with an altruistic and a selfish option) for an algorithm that uses machine learning. Based on a series of decisions made by a participant in the role of the dictator, the algorithm then chooses between two options, and this choice is monetarily relevant for the participant. While in one treatment this decision is based only on the data generated by the participant herself, in another treatment the algorithm's decision additionally draws on data generated by participants in other sessions ("big data"). The participant thus loses control, and her own decisions are "no longer decisive" for the algorithm's choice. Feeling less responsible could lead to more selfish behavior. In an additional treatment, the decisions of participants in the role of the dictator train an algorithm that then makes a monetarily relevant decision for another team in their session. Since these participants no longer control the algorithm's decision in their own team, but only the decision in another team, this treatment captures the moral behavior that people expect from AI systems that make decisions for third parties.
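The project description does not specify the learning model itself. As a purely illustrative sketch of the pivotality manipulation, assuming a simple payoff-difference feature encoding and a scikit-learn logistic regression (neither of which is taken from the study), pooling "big data" from other sessions could dilute the weight of one participant's own decisions like this:

# Illustrative only: the features, classifier, and numbers are assumptions for this
# sketch, not the study's actual algorithm.
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_choice_model(own_decisions, other_decisions=None):
    # own_decisions: list of ((payoff_diff_self, payoff_diff_other), chose_selfish) pairs
    # other_decisions: optional pooled data from other sessions ("big data" treatment)
    data = list(own_decisions)
    if other_decisions:
        data += list(other_decisions)  # own data is no longer decisive
    X = np.array([features for features, _ in data])
    y = np.array([label for _, label in data])
    return LogisticRegression().fit(X, y)

# Hypothetical training data: 1 = selfish option chosen, 0 = altruistic option chosen
own = [((10, -5), 1), ((2, -1), 0), ((8, -4), 1), ((1, 0), 0)]
pooled = [((5, -3), 1), ((3, -2), 0)] * 50  # many other participants' decisions

model_pivotal = train_choice_model(own)           # decision based on own data only
model_big_data = train_choice_model(own, pooled)  # own data is one source among many
print(model_pivotal.predict([[6, -3]]), model_big_data.predict([[6, -3]]))

In the second model, any single participant's choices shift the learned rule far less, which is exactly the loss of pivotality the treatments are designed to manipulate.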

We have already implemented the experiment with all its treatments, together with the machine learning algorithm, in oTree. We drafted the instructions jointly, and all authors went through them several times. The infrastructure for online experiments is currently being set up.
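To indicate roughly what such an implementation can look like, a minimal oTree 5 app for the repeated dictator decision might be structured as below; the app name, constants, and field names are hypothetical and do not reflect the authors' actual code:

from otree.api import *

class C(BaseConstants):
    NAME_IN_URL = 'dictator_training'  # hypothetical app name
    PLAYERS_PER_GROUP = None
    NUM_ROUNDS = 10                    # assumed number of training decisions
    ENDOWMENT = cu(100)                # assumed endowment

class Subsession(BaseSubsession):
    pass

class Group(BaseGroup):
    pass

class Player(BasePlayer):
    # True = selfish option, False = altruistic option
    chose_selfish = models.BooleanField(
        label='Which allocation do you choose?',
        choices=[[True, 'Selfish option'], [False, 'Altruistic option']],
        widget=widgets.RadioSelect,
    )

class Decision(Page):
    form_model = 'player'
    form_fields = ['chose_selfish']

class Results(Page):
    @staticmethod
    def is_displayed(player):
        return player.round_number == C.NUM_ROUNDS

page_sequence = [Decision, Results]

The recorded chose_selfish values over the rounds would then serve as the training data passed to the algorithm described above.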

"
