
Being Facebook’s Lab Rat

If Big Data analytics is the experiment, Facebook is the perfect petri dish.

In early 2012, Facebook conducted an interesting experiment to measure its impact on users’ emotions. The hypothesis was simple: the stories we see in our News Feed affect our mood. Positive stories make us happy, negative ones usually sad.

When the results were published in the Proceedings of the National Academy of Sciences (PNAS) in June this year, many users were outraged. The consensus appears to be that close to 700,000 unsuspecting users were emotionally manipulated. After all, Facebook did tweak users’ News Feeds in favour of either emotionally positive or negative content without obtaining their explicit consent. The backlash seems to have caught even its data scientists off guard, with the lead researcher offering a public explanation (on Facebook, of course).
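The design itself is easy to summarise. Below is a minimal sketch, in Python, of that kind of two-arm study. The sentiment scorer, word lists and omission rate are all illustrative assumptions, not the study’s actual code (the paper classified posts using the LIWC word lists):

```python
import random
from statistics import mean

POSITIVE, NEGATIVE = {"happy", "great", "love"}, {"sad", "awful", "hate"}

def sentiment(post: str) -> float:
    """Toy scorer: > 0 for positive wording, < 0 for negative."""
    words = post.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def build_feed(candidates: list[str], condition: str, omit_rate: float = 0.5) -> list[str]:
    """Randomly withhold some positive (or negative) stories, depending on
    which experimental arm the user was assigned to."""
    feed = []
    for post in candidates:
        s = sentiment(post)
        targeted = s > 0 if condition == "reduce_positive" else s < 0
        if targeted and random.random() < omit_rate:
            continue  # this story never reaches the user's News Feed
        feed.append(post)
    return feed

def avg_sentiment(user_posts: list[str]) -> float:
    """Outcome measure: the emotional tone of what users themselves wrote afterwards."""
    return mean(sentiment(p) for p in user_posts)
```

The point of contention is that the assignment to an arm happened silently: no user ever saw, let alone agreed to, the `condition` parameter.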

But in all this commotion we seem to have lost sight of something quite significant: based on this experiment, the researchers concluded that:

“…emotions expressed by others on Facebook influence our own emotions, constituting experimental evidence for massive-scale contagion via social networks”.

This also raises a larger, perhaps more pressing issue: Is there a place for algorithms and machine learning in social science research? If so, shouldn’t someone be supervising it?

(Image: happy/sad faces, from the public domain)

Algorithms Decide Everything

Those who feel violated by Facebook’s experiment might be surprised to learn that our News Feeds are already being ‘manipulated’. Take a moment to consider what happens when someone actually logs in to Facebook. According to some estimates, its algorithms have to choose from about 1,500 possible posts to display. Tim Herrera of The Washington Post even tried to see every post, but failed (quite emphatically).
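To make that concrete, here is a minimal sketch of what ‘choosing from about 1,500 possible posts’ amounts to in principle. The fields and the scoring formula are invented for illustration; Facebook’s actual ranking model is proprietary and far more elaborate:

```python
from dataclasses import dataclass

@dataclass
class Post:
    author_affinity: float  # how often the viewer interacts with the author
    engagement: int         # likes/comments the post has already attracted
    age_hours: float        # how long ago it was published

def score(post: Post) -> float:
    """Toy relevance score: favour close friends and fresh, popular posts."""
    recency = 1.0 / (1.0 + post.age_hours)
    return post.author_affinity * (1 + post.engagement) * recency

def build_news_feed(candidates: list[Post], k: int = 300) -> list[Post]:
    """Surface only the top-k of ~1,500 eligible stories; the rest are
    silently dropped, which is the 'manipulation' already happening."""
    return sorted(candidates, key=score, reverse=True)[:k]
```

Everything below the cut-off simply never exists for you, and no log tells you what you missed.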

It appears that our fetish for personalisation has made this an acceptable practice, perhaps even desirable for some. After all, what’s better than having a newspaper curated just for you? But an inevitable consequence of our fixation is that algorithms have to rely on past interactions to make a good guess about what we might like in the future. If you ‘Like’ something, you want more of the same thing. If you browse through your friend’s photo too quickly, you might never see an update from her again. Mat Honan recently wrote an interesting piece for Wired about his own little experiment, in which he ‘Liked’ every Facebook post he saw. What happened next will amaze you. Speaking of which, did you arrive here after reading ‘5 Things Every Facebook Addict Should Know’?
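That feedback loop can be sketched in a few lines. The weights and update rule below are assumptions for illustration, but they show why ‘Liking’ everything (as Honan did) or skimming past a friend’s photos can reshape what you see next:

```python
# Toy per-friend affinity store; all constants are illustrative assumptions.
affinity: dict[str, float] = {}

def record_interaction(friend: str, action: str) -> None:
    """Nudge a friend's weight up or down based on observed behaviour."""
    deltas = {
        "like": +0.10,         # explicit positive signal
        "comment": +0.20,      # stronger positive signal
        "fast_scroll": -0.05,  # skimmed past their post quickly
    }
    w = affinity.get(friend, 0.5) + deltas.get(action, 0.0)
    affinity[friend] = min(max(w, 0.0), 1.0)  # clamp to [0, 1]

# Skim a friend's photos too quickly, too often...
for _ in range(10):
    record_interaction("her", "fast_scroll")
# ...and her weight sinks to zero: you may never see her updates again.
print(affinity["her"])  # 0.0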

Twitter, on the other hand, avoids such filtering. Users see every tweet from the accounts they follow, in reverse chronological order. However, some of its executives have proposed introducing ‘recommendations’ and promoted tweets into the Timeline. This too is not surprising. Anyone familiar with the mechanics of targeted advertising will appreciate the value of retaining control over when, how and what users see on a particular piece of internet real estate. That is why some have gone so far as to dismiss Ello’s ad-free model as unsustainable in the long run. Internet companies can build flourishing revenue streams around this type of data (as they admittedly already do).

It should come as no surprise, then, that online businesses want to know where you have been outside their fences as well (Facebook’s Atlas ad platform is a good example). Cookies help with such tracking, even as we move to the more ‘convenient’ single sign-on (SSO) system on mobile devices. Unfortunately, opting out is made deliberately inconvenient for users. Rest assured, there will come a time when algorithms can determine exactly what we like and select the perfect stories for our News Feeds and Timelines. At that point it will be hard for advertisers to resist the temptation to “uncover and trigger consumer frailty at an individual level,” as Professor Ryan Calo has suggested.
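For readers unfamiliar with how a cookie builds that cross-site trail, here is a minimal sketch, with invented site names. Real ad platforms layer identity matching and auctions on top of this basic mechanism:

```python
import uuid

# Toy third-party tracker: one cookie ID, read on every page that embeds
# the tracker. Site names are invented for illustration.
profiles: dict[str, list[str]] = {}

def tracker_pixel(site: str, cookie: str | None) -> str:
    """Called whenever a page embedding the tracker loads in the browser."""
    cookie = cookie or str(uuid.uuid4())  # first visit: mint a new ID
    profiles.setdefault(cookie, []).append(site)
    return cookie  # the browser stores it and sends it back next time

c = tracker_pixel("news-site.example", None)
c = tracker_pixel("shoe-shop.example", c)
c = tracker_pixel("social-network.example", c)  # SSO can tie the trail to a real identity
print(profiles[c])  # ['news-site.example', 'shoe-shop.example', 'social-network.example']
```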

The other side of the problem relates to the inherently two-dimensional nature of social media: what you see and what others see of you. A study slated to be published in the New Media & Society journal concludes that the new algorithms being deployed by Facebook make it hard to detect personality traits. This conundrum is best explained by Jason Millar in an essay for Robohub, where he describes how Facebook managed to ‘blend’ his personality, leading to great frustration. If there is indeed a difference between what one ‘Facebook Likes’ and what one actually likes, it may be wise to think twice before liking too many Buzzfeed articles in a row.

Some of these claims are validated, at least anecdotally, by the vastly different user experience across social media, between sites that aggressively filter content and those that exercise restraint. For example, during the Ferguson episode, some users complained of delayed coverage on Facebook and wondered whether the story would have even caught on if not for Twitter. This raises the question: should we be worrying about ‘algorithmic censorship’ in the context of net neutrality? It seems obvious that our ‘connections’ influence the content we consume online. Social media platforms even distinguish themselves on this fact. While Twitter and Google+ rely on ‘unidirectional consent’ to build networks, Facebook friend lists feature acquaintances and relatives as a result of its ‘bidirectional consent’ requirement of mutual friendship. If ‘relatives’ and ‘interesting people’ mostly overlap in your circles, consider yourself lucky.
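The structural difference between the two consent models is simply that of a directed versus an undirected graph. A minimal sketch, with invented names and helper functions:

```python
# 'Unidirectional consent' (Twitter, Google+): a directed edge. Following
# someone does not require them to follow back.
follows: set[tuple[str, str]] = set()
follows.add(("you", "interesting_stranger"))  # no consent needed from them

# 'Bidirectional consent' (Facebook): an edge exists only once both sides
# agree, which is why friend lists fill up with relatives and acquaintances.
pending: set[tuple[str, str]] = set()
friends: set[frozenset] = set()

def send_request(sender: str, receiver: str) -> None:
    pending.add((sender, receiver))

def accept_request(sender: str, receiver: str) -> None:
    if (sender, receiver) in pending:
        pending.discard((sender, receiver))
        friends.add(frozenset({sender, receiver}))  # undirected: mutual by construction

send_request("uncle", "you")
accept_request("uncle", "you")  # only now does the edge exist
```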

Ethics of Big Data Analysis 

Social networking platforms like Facebook, Twitter and LinkedIn have become vital tools for personal and professional communication. Although targeted advertising is a commercial imperative for many, one wonders whether the collective analysis of personal information (or ‘big data’) can be applied to more useful purposes in the future. Could we, for example, use this data to understand how the Arab Spring was organised, the impact of massive open online courses (MOOCs) on teaching, or whether social media helps get voters to the booth?

There is already some evidence of the benefits of big data analysis. Studies like the one Facebook conducted could give us an ‘unbiased’ view of trends across demographics: life satisfaction by nationality, unemployment rates by age group and the evolving use of language across cultures. Maybe even a true depiction of society’s darker issues, such as jealousy, violence and depression.
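A minimal sketch of that kind of aggregate view, using invented records and only the standard library:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical anonymised records: (nationality, life-satisfaction score 0-10)
records = [("IN", 7.2), ("IN", 6.8), ("US", 6.1), ("US", 5.9), ("BR", 7.9)]

by_country = defaultdict(list)
for country, score in records:
    by_country[country].append(score)

# Life satisfaction by nationality: the kind of aggregate trend imagined
# above, 'unbiased' only insofar as the underlying sample is representative.
for country, scores in sorted(by_country.items()):
    print(country, round(mean(scores), 2))
```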

To get an accurate picture of such phenomena, we need massive data sets. And it is hard to deny that Facebook provides one of the largest samples in human history. Despite its imperfections, it beats telephone surveys and postal ballots, not to mention being highly scalable and vastly cheaper, especially given the growth of autonomous systems. Imagine running a psychological experiment with over a billion human subjects, minus the pain of physical attendance and the rigidity of fixed schedules.

Facebook is a giant laboratory and we are its rats. Better get used to it.

(Image: lab rat, from Vdegroot under a GNU FD license)

The ethics of conducting such experiments are not black and white, and corporations should rise to the challenge. Indeed, the regulatory approvals mandated for real-world studies, such as the ‘Common Rule’ in the United States, should be strictly applied to any study involving human subjects, even one conducted on the internet. At a bare minimum, corporations should be required to obtain the ‘informed consent’ of the users involved in an experiment. Penalties can be monetary, but should also require companies to publicise their aggregation and anonymisation practices and policies. Being transparent might actually help bring users on board. Some basic diligence by Facebook and others might even help minimise legal liability and bad PR in the future.

Ideally, every company beyond a certain user threshold should be required to constitute an ethics board. Experts from different fields of study should examine the legality of every research proposal, its methods of study and its potential impact. Funding from private corporations above a certain limit should be strictly scrutinised. It is heartening to note that some companies are already taking steps in this direction. Google has constituted an internal ethics board in anticipation of its foray into uncharted territory such as artificial intelligence, robotics and healthcare. (Update: Facebook has also stated that it will subject human-subject research to greater internal scrutiny, though with few details.)

Scientists today use lab rats to devise cancer treatments and understand human psychology. Perhaps, with a little effort, our aimless social networking can also count for something. Did someone say cheese?

About Amlan Mohanty

Amlan is a lawyer based in India. He has been writing about technology and intellectual property for the last five years.