Facebook recently came under fire for a research study that was published in the Proceedings of the National Academy of Sciences. Accused of manipulating users’ emotions, violating privacy rights and even infringing on free speech, the social network stands by its controversial research that turned almost 700,000 of its 1.11-billion strong user base into guinea pigs.
Although marketers rarely make decisions about consumer data at this scale, the case poses questions the online community should consider when running social media campaigns.
The research paper, Experimental evidence of massive-scale emotional contagion through social networks, evaluated whether emotions could be spread through non-verbal, non-direct communication. Basically, for a week back in January 2012, Facebook either showed people a sprinkling of extra posts containing negative language or a few additional positive-leaning posts.
Testing mood-altering posts
In the name of product innovation, Facebook wanted to test whether populating users’ feeds with emotionally negative posts made them more likely to post in a negative or sad tone. Conversely, it wanted to see whether an influx of positive social content led individuals to share optimistic content themselves.
Here’s a brief overview of the findings:
- When users were shown even a few additional negative posts for every thousand-or-so News Feed updates, they did have a tendency to publish angry or sad content themselves
- When users were shown a few extra positive posts for every thousand-or-so News Feed updates, they were more likely to share happy posts themselves
- When users were shown neutral posts (no emotional sway), they were less expressive altogether
“The reason we did this research is because we care about the emotional impact of Facebook and the people that use our product,” said Facebook spokesperson Adam Kramer in response to the backlash. “We felt that it was important to investigate the common worry that seeing friends post positive content leads to people feeling negative or left out. At the same time, we were concerned that exposure to friends’ negativity might lead people to avoid visiting Facebook.”
The debate: In the clear, or out of line?
Further, Facebook claims this doesn’t actually violate any individual rights: when users sign up, they agree to the company’s terms of service, which allows the use of data for analysis, testing and research. However, consumers’ responses to the report suggest Facebook took its data mining practices well outside users’ comfort zones, and whether the terms of service truly cover an experiment like this deserves to be called into question.
- First off, there’s the question of ethics. In terms of the premise: Is it permissible that Facebook, for all intents and purposes, fabricated conditions to see if it could sway users’ (some of whom were minors) emotions? In terms of the conclusion: Is Facebook blameworthy for artificially making users happy or sad? Regardless, the social network is certainly bending the standards of research ethics and walking a fine line on human rights.
- Second: Privacy. The study presents itself as consistent with Facebook’s data use policy, but the legality of the experiment (under both Facebook policy and federal law) is murky at best.
Interestingly, the word “research” was added to Facebook’s user agreement four months AFTER the study took place. Conspiracy theorists and whistleblowers are having a field day piecing together NSA connections and Department of Defense funding to Facebook’s experimental practices, both in the open and behind closed doors.
Currently, Facebook is under investigation in the UK and Ireland regarding these grievances.
Facebook says: All in the name of innovation
Still, not everyone is mad about Facebook’s experiment. Forbes staff writer Matthew Herper penned an op-ed letter to Facebook, imploring the social giant to continue its experiments and publish the findings. Herper posits that Facebook will, in all likelihood, persist in its research and it’s better that consumers at least have access to the findings.
Big data – use it or lose it
While most social media strategists are already primed to throw more coal on the Facebook hate train, one can certainly see why the data the company is gathering would turn any businessperson to the dark side. Herper’s letter might, in fact, ring true for a lot of marketers who face daily pressure to make use of all the DATA at their fingertips. It’s not just Google Analytics; it’s Moz and Hubspot, Sprinklr and Marketo. In the GA Universal Analytics update, marketers were promised the ability to track individual users across digital touchpoints, provided they updated their privacy policies to notify visitors.
So I ask: What can marketers glean from this valuable, albeit dirty, data?
Any marketer worth his or her salt knows the value of experimentation. We A/B test subject lines and titles to find out what readers are most likely to click. We’re encouraged to create content that’s meant to trigger emotional responses in readers, because that’s what drives sales. If experimentation and innovation are the keys to success – where does that leave us?
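To make the A/B testing point concrete, here is a minimal sketch of how a marketer might judge whether one subject line genuinely outperformed another, using a standard two-proportion z-test. The send counts and click numbers below are hypothetical, chosen purely for illustration.

```python
import math

def two_proportion_z(clicks_a: int, sends_a: int,
                     clicks_b: int, sends_b: int) -> float:
    """Z-statistic for the difference between two click-through rates.

    Uses the pooled-proportion standard error, the textbook test for
    comparing two independent conversion rates.
    """
    p_a = clicks_a / sends_a
    p_b = clicks_b / sends_b
    p_pool = (clicks_a + clicks_b) / (sends_a + sends_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / sends_a + 1 / sends_b))
    return (p_b - p_a) / se

# Hypothetical subject-line test: 5,000 sends per variant.
# Variant A: 400 clicks (8.0% CTR); Variant B: 460 clicks (9.2% CTR).
z = two_proportion_z(400, 5000, 460, 5000)
# |z| > 1.96 corresponds to significance at the usual 95% level.
print(f"z = {z:.2f}, significant: {abs(z) > 1.96}")
```

The same arithmetic underlies most split-testing dashboards; the ethical question the article raises is not whether to run such tests, but what kinds of emotional manipulation the variants themselves are allowed to contain.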
Obviously Facebook’s study raises more questions than it answers, but it prompts everyone in the digital landscape to ask whether this type of research is warranted and at what point it crosses the line. Ultimately, it comes down to business goals and values. Is upsetting the audience worth the extra engagement and clicks? At what point do you need express permission to stay in the safe zone?