Moral (Mis)Behavior in the Era of Big Data

Project Start: 06/2020
Status: Completed
Researchers: Victor Klockmann, Marie Claire Villeval, Alicia von Schenk
Area: Household Finance, Experiment Center
Funded by: SAFE

Topics & Objectives

With big data, the decisions made by machine learning algorithms depend on training data generated by many individuals. In many settings the data partly stems from humans and their behavior: judges employ decision support systems that predict recidivism and inform bail decisions, financial companies use robo-advisors to guide investors' portfolio composition, and human resources departments increasingly rely on intelligent automation. From an ethical perspective, this means that individuals bear responsibility for the data they generate for the training of artificial intelligence (AI): the more polarized their own behavior, for example, the more likely the algorithms' future decisions will exhibit the same polarization. However, the feeling of individual responsibility for this training may be more or less diffuse, depending notably on how pivotal an individual's decisions are in generating the training data for the AI.

In an online experiment, our goal was to identify the effect of varying individual responsibility for the moral choices of an artificially intelligent algorithm. We studied to what extent varying individuals' commonly known pivotality in the training of the algorithm affected prosocial human behavior. Across treatments, we manipulated the sources of training data and thus the impact of each person's decisions on the algorithm: in some treatments, the individual's decisions were the exclusive source of the AI's training data, while in others they represented only half, one percent, or none of it.
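
To make the treatment manipulation concrete, here is a minimal sketch in Python. It is an illustrative toy, not the authors' implementation: the training-set size, the majority rule, and the choice encoding (0 = selfish, 1 = prosocial) are all hypothetical. It shows how the share of training data coming from one participant determines that participant's pivotality for the algorithm's eventual choice.

```python
import random

def train_algorithm(own_choices, others_choices, own_share, rng):
    """Build a training set in which `own_share` of the examples come from
    the participant and the rest from other participants, then let the
    'algorithm' adopt the majority choice (0 = selfish, 1 = prosocial)."""
    n = 100  # hypothetical training-set size
    n_own = round(own_share * n)
    sample = (rng.choices(own_choices, k=n_own)
              + rng.choices(others_choices, k=n - n_own))
    return round(sum(sample) / len(sample))  # majority rule

rng = random.Random(42)
own = [1, 1, 1]     # a consistently prosocial participant
others = [0, 0, 1]  # mostly selfish other participants

# The four treatment shares described above: exclusive, half, one percent, none.
for share in (1.0, 0.5, 0.01, 0.0):
    decision = train_algorithm(own, others, share, rng)
    print(f"own share {share:>5.0%} -> algorithm chooses "
          f"{'prosocial' if decision else 'selfish'}")
```

Under these toy assumptions, the prosocial participant fully determines the algorithm's behavior in the exclusive-source treatment, while at a one-percent share the same choices are drowned out by the other participants' data.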

Key Findings

  • Reducing individual pivotality for algorithmic choices induced more selfish choices and weakened revealed prosocial preferences.
  • The above change did not result from a change in the structure of incentives; rather, big data offered an excuse for selfish behavior through lower responsibility for one’s own and others’ fate.
  • Individuals who believed that the algorithm was trained by other selfish individuals behaved more selfishly themselves when generating training data, but only when they were to some extent pivotal for others’ payoffs.

Policy Implications

Our findings suggest a need to attribute explicit and salient individual responsibility to those who affect algorithmic predictions. Especially when algorithms are trained on a multiplicity of data sources, the shift towards more selfish behavior risks producing biased algorithmic decisions. As with elections, where every vote counts even though each voter may feel powerless individually, it is important that companies remind their employees that ethics matter for every decision they make, even if these individual decisions will be combined with those of many other employees to train algorithms. However, this is certainly not sufficient to prevent the development of “selfish algorithms”, which is why we also suggest incorporating general ethical principles into the design of these programs.

In fact, our findings reopen a larger debate in the field of AI ethics: should computer scientists make the training datasets for intelligent systems more diverse and perhaps even create “idealized” data, as Eric Horvitz, a technical fellow at Microsoft, has suggested? Or should datasets reflect the human biases that exist in the real world? Should developers build affirmative action into their algorithms, or should they create “blind” algorithms that mirror human decision-making and (social) preferences, even if this potentially increases inequality? “Human in the loop” interventions should certainly be employed to validate models, check the AI’s decisions, and flag harmful consequences.

Related Working Papers

  • No. 336: Victor Klockmann, Marie Claire Villeval, Alicia von Schenk, “Artificial Intelligence, Ethics, and Pivotality” (2022). Area: Household Finance, Experiment Center. Keywords: Artificial Intelligence, Big Data, Pivotality, Ethics, Experiment
  • No. 335: Victor Klockmann, Marie Claire Villeval, Alicia von Schenk, “Artificial Intelligence, Ethics, and Intergenerational Responsibility” (2022). Area: Household Finance, Experiment Center