Digital Lab Rats

In 2012, Facebook conducted an experiment to measure its impact on users’ emotions. The hypothesis was simple: the stories we see in our News Feed affect our mood – positive posts make us happy, negative posts make us sad. Based on a study of more than half a million users, the researchers concluded that:

"Emotions expressed by others on Facebook influence our own emotions, constituting experimental evidence for massive-scale contagion via social networks”.

When the results were published, people were outraged. Facebook had tweaked users’ News Feeds to show some users more positive posts and others more negative ones, and tracked their responses, but it never obtained their explicit consent. Close to 700,000 users appear to have been emotionally manipulated without their knowledge.

This entire episode raises an important question about the ethics of algorithmic manipulation in social science research. Put another way: are we all just Facebook's lab rats?

Algorithmic Manipulation

Our social media feeds are already being manipulated. By some estimates, Facebook’s ranking algorithms must choose from about 1,500 candidate posts each time a user visits their News Feed. One stubborn user even tried to see every post, but failed.

Perhaps it is our desire for personalisation that has normalised this practice. Algorithms rely on our past interactions to make guesses about what we might like to see next. If you interact with multiple posts on a topic, you are likely to see more of the same thing. If you gloss over your friend’s photos, you might never see another update from them again. Some call this a 'filter bubble'.
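To make those mechanics concrete, here is a minimal sketch of engagement-based ranking in Python. Everything in it is an illustrative assumption – the toy data and the crude topic-affinity score – not any platform's actual algorithm:

```python
from collections import Counter

# Hypothetical history: topics of posts this user previously engaged with.
past_interactions = ["cats", "cats", "politics", "cats", "travel"]

# Candidate posts competing for the top of the feed (made-up data).
candidates = [
    {"author": "alice", "topic": "cats",      "text": "Kitten photos"},
    {"author": "bob",   "topic": "politics",  "text": "Election hot take"},
    {"author": "carol", "topic": "travel",    "text": "A week in Lisbon"},
    {"author": "dave",  "topic": "gardening", "text": "Tomato harvest"},
]

# Count past engagements per topic; topics never engaged with score zero.
topic_affinity = Counter(past_interactions)

def score(post):
    # More past engagement with a topic means a higher rank for that topic.
    return topic_affinity[post["topic"]]

feed = sorted(candidates, key=score, reverse=True)
for post in feed:
    print(score(post), post["author"], post["text"])
```

Note how the gardening post scores zero and sinks to the bottom: it will rarely surface unless the user somehow engages with that topic first. The loop feeds on itself, which is the filter bubble in miniature.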

But there is an alternative to an algorithmically curated feed. On Twitter, for example, you are shown tweets in reverse chronological order, without the platform exercising any algorithmic control. Yet now even Twitter's executives have openly talked about integrating recommendations into a user’s Timeline.
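A purely chronological timeline, by contrast, needs no model of the user at all. A minimal sketch, again with made-up data:

```python
from datetime import datetime, timezone

# Illustrative tweets; only the timestamp matters for ordering.
tweets = [
    {"author": "alice", "text": "good morning", "posted_at": datetime(2015, 3, 1, 8, 0, tzinfo=timezone.utc)},
    {"author": "bob",   "text": "lunch break",  "posted_at": datetime(2015, 3, 1, 12, 30, tzinfo=timezone.utc)},
    {"author": "carol", "text": "sunset",       "posted_at": datetime(2015, 3, 1, 18, 45, tzinfo=timezone.utc)},
]

# Reverse chronological: newest first, no scoring, no personal data required.
timeline = sorted(tweets, key=lambda t: t["posted_at"], reverse=True)
for t in timeline:
    print(t["posted_at"].isoformat(), t["author"], t["text"])
```

The trade-off is plain: the chronological feed is transparent and predictable, while the ranked feed is engaging but opaque.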

The incentive for platforms to curate user feeds is obvious to anyone familiar with the mechanics of targeted advertising. Internet companies build flourishing revenue streams by collecting personal data, then curating their feeds to serve relevant ads. That is why platforms use cookies to track users outside their digital fences, and why they make opting out so inconvenient. As law professor Ryan Calo writes, there will come a time when advertisers find it hard to resist the temptation to “uncover and trigger consumer frailty at an individual level.”

The other issue is the bi-directional nature of social media – there is what you see, and there is what others see of you. Are we really who our Facebook profiles and activity say we are? A new study slated for publication suggests that Facebook's algorithms make it hard to detect personality traits; as Jason Millar explains, the algorithm 'blended' his personality in ways that led to great frustration.

Ethics of Big Data Analysis

Although big data analysis for targeted advertising has become a commercial imperative, I wonder if it can be put to better uses. Studies like the one Facebook conducted could give us an unbiased view of trends across demographics – life satisfaction by nationality, unemployment rates by age group, the evolving use of language across cultures. They could also measure the impact of online courses on teaching styles, or help us understand how the Arab Spring was organised.

Platforms like Facebook and Twitter provide a distinct advantage here. To get an accurate picture of such phenomena, we need massive data sets, and these platforms offer the largest samples in human history. They are scalable, cheaper, and probably more reliable than telephone surveys and postal ballots (just imagine running a psychological experiment with half a billion human subjects over the phone or in person).

However, we need to discuss the ethics of conducting such experiments. Regulatory approvals mandated for real-world studies, such as the ‘Common Rule’, should apply to online studies as well. At a minimum, corporations should be required to obtain the ‘informed consent’ of the users involved in an experiment. Companies should also be required to publish their privacy policies in relation to the experiment. Being transparent will minimise risks and might actually help bring more users on board.

Moreover, every company conducting experiments beyond a certain scale should be required to establish an ethics board. Experts from different fields should examine the legality of each research proposal, its methods of study, and its potential impact. Funding from private corporations above a certain limit should be scrutinised. Some efforts in this direction already seem to be underway: Google DeepMind has proposed an ethics board for its AI research, and even Facebook has acknowledged that users deserve greater protections in future studies.

Today, scientists observe lab rats to make important scientific discoveries. Perhaps, with some safeguards in place, observing our interactions on social media can lead to important breakthroughs too.