Saturday, July 26, 2014
Having spent a good part of my career focused on research ethics issues, I have read with interest the reports and fallout about Facebook’s “emotional contagion” experiment. It’s not often that you get an example of research gone awry that affects so many.
If you’ve missed this story, here’s the nutshell version: in 2012, Facebook conducted an experiment involving some 700,000 of its users to determine whether manipulating their news feeds to show items with more positive or negative words would affect their emotions. (Facebook is always sorting users’ news feeds via algorithms; this experiment simply used a different algorithm to do so.) The hypothesis was that users who viewed more positive messages would write posts containing more positive words, and vice versa. The news feeds were manipulated for a week, and the study found an effect, albeit a small one. The study was published on June 17th in the Proceedings of the National Academy of Sciences.
Although I have not systematically evaluated the coverage, the response to news of the study seems to be more negative than positive. The study has been called, among other things, “sneaky” and “unsettling” and has resulted in an “outcry”. One of the researchers even called the study “creepy”. My students had not heard about the study, but, when asked, several were uncomfortable with the idea that they could have been participants without their knowledge. Even those who did not object to the concept of the study were suspicious of the company’s motives, and there were questions about the value of the study.
What is interesting is that what Facebook did was almost certainly legal. The federal regulations governing human subjects research (45 CFR 46) apply only to research that is conducted or supported by the federal government, although institutions like universities and hospitals often agree to apply them to all research conducted there. Thus, the regulations would not apply to Facebook. (FDA human subjects regulations do apply to private companies that seek FDA approval for their drugs and devices.) The regulations probably applied to the two academic researchers who joined the Facebook employee on the paper. However, it appears they joined the project after the data had been collected. Because none of the data was linked to individuals, their involvement in the study was determined to fall within one of the exemptions to the federal regulations.
Facebook has pointed out that users consent to such manipulations as part of its data use agreement. However, it is well known that users typically do not read such agreements with care, if at all. (This is a problem shared with research consent forms, which may not be understood even when they are read.) The public reaction also suggests that users did not believe they had agreed to such manipulation.
In the end, Facebook apologized for the study. It remains to be seen whether the company will change its policies about research conducted with user data. However, the public response to the study suggests the need to revisit the federal regulations governing human subjects research, which have not undergone substantial change since 1981. In 2011, the Department of Health and Human Services issued an advance notice of proposed rulemaking (ANPRM) that included proposals to require consent in some situations currently exempted under the regulations and to extend the regulations’ reach. While those efforts appear to have stalled, there are some indications that amending the regulations remains on HHS’s regulatory agenda. If the effort does move forward, it should keep the public’s reaction to the Facebook study in mind to ensure we retain the public’s trust. Our ability to conduct research on a vast array of topics depends on it.