

Why do people hate corporate experiments? (They don't)

07/2018

How online users react when companies run hidden experiments on them


Companies are constantly running experiments on you without you knowing it. When you're reading your Facebook feed or surfing any website, what you see in front of you could be part of an experiment.


In his paper "Critical Condition," ESADE Professor Uri Simonsohn and his coauthors Robert Mislavsky (Johns Hopkins University) and Berkeley Dietvorst (University of Chicago) reveal how people react when digital companies run hidden experiments on them. Sometimes they don't mind at all; other times, they are furious. Best of all, consumer responses are highly predictable: knowing when users will be unhappy turns out to be quite easy.


ESADE Knowledge: Your interest in this topic was triggered by a Facebook experiment that prompted a big negative response. What happened?


Uri Simonsohn: A few years ago, Facebook wanted to test whether emotions were contagious on social media. They selected a number of users at random and started prioritizing negative rather than positive posts from friends in their feeds to see how they would react.


What happened?


Users who had been exposed to sad emotions from their friends posted slightly more negative content themselves. The same thing happened for happy emotions: users seeing positive posts on their Facebook feeds started to post nicer content.


How did users react when they found out about this experiment?


Facebook was shaken by a huge backlash. It led to a popular outcry that questioned the moral limits of experiments. Many editorials and news programs talked about how companies shouldn't use consumers as "guinea pigs," how this was unethical behavior... Several companies that had been collaborating with academics who were running experiments even decided to stop doing so.


But as a psychologist, I thought this seemed very odd.


People don't usually morally object to harmless behavior


Where does people's natural aversion to experiments come from?


People don't usually morally object to harmless behavior unless there is a strong cultural or religious reason (e.g., eating a forbidden food). If nobody gets hurt, and there is no cultural tradition of objecting to experiments, why would people object to them?


My conjecture was that people didn't actually object to experiments - that the experiment part was a red herring. People disliked what the experiment did to people.


For example, people don't hate baskets in general. But if you put bad things in a basket, then people won't like the basket.


People don't hate experiments in general. But if you put bad policies in an experiment, then people won't like the experiment.


How did you go about testing this idea?


We ran about 20 studies where we asked people either a) to evaluate how acceptable it is for a company to run an A/B test comparing policy A to policy B, or b) to evaluate how acceptable it is for a company to implement policy A or policy B. Straight up. No experiment.


So we ran an A/B test on A/B testing...


What we found very consistently across the many studies is that if policy A and policy B are acceptable as standalone policies, then an experiment about A versus B is deemed acceptable. But if either A or B is unacceptable, then the experiment is also deemed unacceptable.
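The pattern Simonsohn describes can be captured in a simple decision rule. The sketch below is a hypothetical model of that finding, not code from the paper; the function name and the example inputs are illustrative.

```python
def experiment_acceptable(policies_acceptable):
    """People deem an A/B test acceptable only if every policy
    in it would be acceptable as a standalone policy."""
    return all(policies_acceptable)

# Gym-incentive example from the interview (illustrative values):
bonus_test = [True, True]      # arm A: €5 bonus, arm B: no incentive
penalty_test = [False, True]   # arm A: fee for skipping the gym, arm B: no incentive

print(experiment_acceptable(bonus_test))    # acceptable test
print(experiment_acceptable(penalty_test))  # the test inherits the penalty's taint
```

The point of the rule is that the experiment itself contributes nothing negative: acceptability is simply the conjunction of the acceptability of its arms.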


Can you give an example of that?


Imagine a company that wanted its employees to go to the gym and was considering an incentive. If the incentive is a payment, that is acceptable - people don't mind getting money for doing things, so you can run an A/B test on the optimal incentive. For example, give some employees €5 if they go to the gym, and give others no incentive. That's okay. People will find that to be acceptable corporate behavior.


But what if the incentive is a penalty? People don't think it is acceptable for their employer to charge them a fee if they don't go to the gym, so if you run an A/B test assessing the effectiveness of penalties, people will hate the experiment. But not really. They really just hate the penalty.

In both cases, there is an experiment incentivizing employees to exercise more. One does not include unacceptable policies, so the experiment is okay. The other does include unacceptable policies, so the experiment is not okay.


Can you tie that back to Facebook?


Running an experiment was not what Facebook did wrong. What Facebook did wrong was doing something its users disliked: prioritizing sad stories in their feeds.


If, one morning, Facebook did that to every user without conducting an experiment, consumers would still have objected. Indeed, our research shows they would object to that even more than an A/B test where some randomly selected consumers get the sad stories.



Why would the experiment be less bad?


If I showed up in class one day and gave my students (a safe level of) electric shocks for no reason, this would obviously be perceived as wrong and unacceptable. But if it were part of a controlled experiment, people might be a bit more understanding. Maybe something could be learned from it. Maybe it isn't just sadism. People become more accepting of otherwise unacceptable behavior when it is part of an experiment from which something can be learned.


What's the takeaway?


If you are a company running A/B tests, don't be afraid to let your users know. They won't mind being "guinea pigs" in your experiments. Just don't harm them. Before you run a test, make sure all the policies you are testing would be accepted outside of an experiment. If you don't trust your intuition, survey your users to find out whether the policies would be acceptable. As long as a company follows this simple rule - "Don't include objectionable policies in your A/B tests" - people will be completely okay with its experiments.
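That pre-launch survey check could be operationalized as a simple gate: rate each standalone policy, and run the test only if every arm clears an acceptability threshold. Everything below is a hypothetical sketch — the rating scale, threshold, and survey numbers are made up for illustration.

```python
def ok_to_run_ab_test(mean_ratings, threshold=4.0):
    """Gate an A/B test on surveyed acceptability: run it only if
    every policy's mean rating (say, on a 1-7 scale) clears the threshold."""
    return all(rating >= threshold for rating in mean_ratings.values())

# Illustrative survey results for the gym-incentive example:
bonus_survey = {"€5 gym bonus": 5.8, "no incentive": 5.1}
penalty_survey = {"gym-skipping fee": 2.3, "no incentive": 5.1}

print(ok_to_run_ab_test(bonus_survey))    # both arms acceptable: go ahead
print(ok_to_run_ab_test(penalty_survey))  # penalty arm fails: redesign the test
```

The design choice mirrors the finding: the gate evaluates each policy on its own, never the experiment as a whole, because per the research that is exactly how users judge it.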


"Experiment aversion" is a myth. Users have nothing against experiments that don't cause harm.
