There are growing efforts to provide people with a right to explanation, i.e., to inform them about the use and nature of predictive algorithmic assessments affecting them. This push toward greater transparency raises several important questions. In this paper, we study whether disclosing algorithmic predictions can influence targets' perceptions of themselves and of the social context in which they interact with others. We develop a novel experimental protocol that incrementally varies users' and targets' access to an algorithm's prediction, which can help users make an income-maximizing decision in a strategic setting under uncertainty. We find that privately disclosing inaccurate algorithmic predictions to targets causally steers their behavior in the direction of the prediction. When targets additionally learn that a user has effectively rubber-stamped a prediction at their own peril, they behave opportunistically at the expense of the user. We interpret these findings as evidence that algorithmic predictions can influence people's perceptions of what kind of person they are and how they ought to behave. As a consequence, algorithms may endogenously manipulate and contaminate targeted people's behavior, leading to unintended and unexpected side effects. Because it is only learning about inaccurate algorithmic outputs that triggers these unintended ramifications, we argue that enhancing algorithmic transparency must go hand in hand with quality checks and continued monitoring of algorithmic systems.