Moral Decisions and the Externality of AI Usage

Project Start: 06/2020
Status: Ongoing
Researchers: Victor Klockmann, Marie-Claire Villeval, Alicia von Schenk
Category: Household Finance, Experiment Center
Funded by: SAFE

"Humans shape the behavior of artificially intelligent (AI) algorithms. There are two mechanisms: active human input through direct development of AI systems, and training these systems receive through the passive observation of human behavior based on the data constantly generated. The choice of and the properties represented by a data set significantly influence the behavior of the algorithm (Rahwan et al., Nature 2019). Consider, for instance, AI that gives financial advice or takes investment decisions based on consumer behavior and observed preferences. The revealed willingness to accept risks or making unethical investments might decrease in the extent the AI learns from one's behavior, changing its decision making for successor investments. Given that one's decisions generate training data for an algorithm and create an externality on future decision making of intelligent systems, how does the use of AI affect the morality of human behavior? Is it possible to strengthen the awareness of responsibility by emphasizing the consequences of training on the well-being of future generations?

The project focuses on the intergenerational aspect of AI, which stores and transmits not only data and "knowledge" but also "preferences". We thereby emphasize the training and learning of AI, in contrast to other technologies. In an online experiment (conducted via oTree and Zoom instead of in the lab), participants decide repeatedly in a variation of the dictator game with positive or negative monetary consequences for a fellow player. These decisions train an algorithm, which then takes decisions itself. We manipulate the influence of the AI on the participant and on future sessions. We test whether an externality of today's behavior on future generations of participants, transmitted through machine learning, affects the decisions of today's subjects (cf. Schotter & Sopher, JPE 2003, Exp. Econ. 2006, GEB 2007).
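To make this training-and-transmission mechanism concrete, the following is a minimal, hypothetical sketch of an algorithm that learns from a participant's dictator-game choices and then decides on its own. It is not the project's actual implementation: the single feature (the stake at play), the labels, and the choice of a logistic-regression classifier are illustrative assumptions only.

```python
# Hypothetical sketch only: model, feature, and data are illustrative
# assumptions, not the study's actual algorithm or dataset.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row is one observed dictator decision by the participant.
# Feature: the (normalized) payoff at stake for the fellow player;
# label: 1 if the participant chose the selfish option, 0 if the fair one.
stakes = np.array([[0.2], [0.4], [0.6], [0.8], [1.0], [0.3], [0.7]])
chose_selfish = np.array([0, 0, 1, 1, 1, 0, 1])

# "Train" the algorithm on the participant's observed choices ...
model = LogisticRegression().fit(stakes, chose_selfish)

# ... and let it take a decision itself, e.g. for a future session.
new_stake = np.array([[0.5]])
ai_decision = model.predict(new_stake)[0]
print("AI chooses the selfish option:", bool(ai_decision))
```

Under this reading, the externality arises because every choice the participant makes enters the training data and thereby shifts the decisions the model later takes on behalf of others.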

The treatments also vary the nature of the externality. In one treatment, participants only receive information about the externality of their training data; in the other treatments, it also has monetary consequences for themselves. Participants in the first round of sessions receive an additional payment at a later date. This amount depends on the decision that the algorithm they themselves trained takes for participants in a second round of sessions, i.e., for their successors in a "future generation". We also plan to vary whether this successor is in an advantageous or disadvantageous position in the dictator game, thereby capturing the influence of social mobility.
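As an illustration of this deferred, externality-based payment, the sketch below computes a first-generation participant's second payout from the decision their trained algorithm takes for a successor. The linear payoff rule and all parameter values are hypothetical; the project text states only that the later payment depends on the algorithm's decision for the successor.

```python
# Hypothetical sketch of the deferred payment described above; the payoff
# rule and all numbers are illustrative assumptions, not the study's design.

def first_round_payout(ai_allocation_to_successor: float,
                       base_payment: float = 10.0) -> float:
    """Second-date payment to a first-generation participant, depending on
    how the algorithm they trained treats a second-generation successor."""
    # Illustrative rule: the trainer earns whatever share the AI grants
    # to the successor, on top of a fixed base payment.
    return base_payment + ai_allocation_to_successor

# Example: the trained AI allocates 4.0 points to the successor, so the
# original trainer later receives 10.0 + 4.0 = 14.0 points.
print(first_round_payout(4.0))
```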

We implemented the experiment with all its treatments, including the machine learning algorithm, in oTree. One of us drafted part of the instructions, and all authors revised them several times. The infrastructure for online experiments is currently being set up.
