Thursday, July 3, 2014
About That Facebook User Manipulation Study
By now, most of you have probably heard about Facebook's study, which tweaked users' news feeds to see whether doing so affected their moods. The study was apparently conducted in response to reports that some FB users were getting bummed out by reading about all the wonderful things happening to their shiny friends. According to FB's study, this was not true. The researchers found that happy news led to happier posts and negative news resulted in more negative posts, leading them to conclude that moods were contagious. (My take on the results of that study is below.) Facebook claimed that users consented to being part of this experiment when they agreed to the company's terms of service. Public outrage ensued, but even among the outraged, there was a consensus that Facebook "legally" had a right to do this because of the terms of use, even if ethically it should have refrained (or at least obtained active consent).
Such is the rhetorical power of a contract, even one that nobody reads.
I think it's at least questionable whether Facebook's terms of use gave it the right to conduct this user manipulation study, and not just, as Kashmir Hill of Forbes points out, because the word "research" didn't appear in its Data Use Policy until four months after the study took place. As contracts profs know, the under-utilized and under-enforced implied covenant of good faith and fair dealing applies to contracts and is recognized under California law (which governs FB's terms). The broad language of the Data Use Policy makes it sound like Facebook will use data to improve its services, not to test whether its users get happy or sad when it manipulates their news feeds. Other provisions of FB's agreement with users make it reasonable to reach that conclusion: data will be used to keep its services "safe and secure," to provide users with location features and services, and "to make suggestions to you." Even the language regarding research -- "for internal operations, including troubleshooting, data analysis, testing, research and service improvement" -- when read in context (as it should be), indicates that the purpose of using the data is to enhance the user experience, not to manipulate user behavior.
They also say "your trust is important to us."
Did Facebook act in bad faith by manipulating users' data feeds? It's at least arguable that they did.
Now, about the research results: as far as what the results showed, I'm not sure that the study proved that positive posts enhanced users' moods (and vice versa). A user may have changed the nature of a post in order to conform to the prevailing mood, but that doesn't mean the user actually felt happier. Positive posts from others might have pressured users to "fake it" by writing more positive posts, and vice versa. So I'm not convinced that the research refuted the claim that happy Facebook posts depressed some FB users...
https://lawprofessors.typepad.com/contractsprof_blog/2014/07/about-that-facebook-user-manipulation-study.html