Two scientists writing in the Nov. 28 issue of Science caution that social scientists using data from social media platforms like Twitter and Facebook are frequently getting biased results.
A growing number of academic researchers are mining social media data to learn about both online and offline human behaviour. In recent years, studies have claimed the ability to predict everything from summer blockbusters to fluctuations in the stock market.
But mounting evidence of flaws in many of these studies points to a need for researchers to be wary of serious pitfalls that arise when working with huge social media data sets, according to computer scientists at McGill University in Montreal and Carnegie Mellon University in Pittsburgh.
Such erroneous results can have huge implications: thousands of research papers each year are now based on data gleaned from social media. “Many of these papers are used to inform and justify decisions and investments among the public and in industry and government,” says Derek Ruths, an assistant professor in McGill’s School of Computer Science.
In an article published in the Nov. 28 issue of the journal Science, Ruths and Jürgen Pfeffer of Carnegie Mellon’s Institute for Software Research highlight several issues involved in using social media data sets – along with strategies to address them. Among the challenges:
Different social media platforms attract different users – Pinterest, for example, is dominated by females aged 25-34 – yet researchers rarely correct for the distorted picture these populations can produce.
Publicly available data feeds used in social media research don’t always provide an accurate representation of the platform’s overall data – and researchers are generally in the dark about when and how social media providers filter their data streams.
The design of social media platforms can dictate how users behave and, therefore, what behaviour can be measured. For instance, on Facebook the absence of a “dislike” button makes negative responses to content harder to detect than positive “likes”.
Large numbers of spammers and bots, which masquerade as normal users on social media, get mistakenly incorporated into many measurements and predictions of human behaviour.
Researchers often report results for groups of easy-to-classify users, topics, and events, making new methods seem more accurate than they actually are. For instance, efforts to infer the political orientation of Twitter users achieve barely 65% accuracy for typical users, even though studies focusing on politically active users have claimed 90% accuracy.
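That last pitfall is easy to see with a toy calculation. The sketch below uses made-up labels (not data from the studies cited) to show how a classifier's accuracy on a politically active subgroup can far exceed its accuracy on the full user base:

```python
# Illustrative sketch with hypothetical data: accuracy measured only on an
# easy-to-classify subgroup overstates accuracy on the full population.

def accuracy(pairs):
    """Fraction of (predicted, actual) pairs that match."""
    return sum(p == a for p, a in pairs) / len(pairs)

# (predicted, actual, is_politically_active) -- invented for illustration
results = [
    ("left", "left", True), ("right", "right", True),
    ("left", "left", True), ("right", "right", True),
    ("left", "right", False), ("right", "left", False),
    ("left", "left", False), ("right", "right", False),
    ("left", "right", False), ("right", "right", False),
]

active = [(p, a) for p, a, is_active in results if is_active]
everyone = [(p, a) for p, a, _ in results]

print(f"politically active users: {accuracy(active):.0%}")   # prints 100%
print(f"all users:                {accuracy(everyone):.0%}") # prints 70%
```

Reporting only the first number would make the method look far stronger than it is for a typical user.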
Many of these problems have well-known solutions from other fields such as epidemiology, statistics, and machine learning, Ruths and Pfeffer write. “The common thread in all these issues is the need for researchers to be more acutely aware of what they’re actually analyzing when working with social media data,” Ruths says.
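One such borrowed technique is post-stratification weighting, a standard survey-statistics correction for samples whose demographics do not match the population. The sketch below uses hypothetical counts and population shares (not figures from the article) to show the idea:

```python
# A minimal sketch of post-stratification weighting, assuming hypothetical
# age-group counts for a skewed social media sample and census-style
# population shares. Each stratum is reweighted so the sample's composition
# matches the target population.

sample_counts = {"18-24": 600, "25-34": 300, "35+": 100}       # skewed sample
population_share = {"18-24": 0.20, "25-34": 0.30, "35+": 0.50}  # target shares

n = sum(sample_counts.values())

# weight = (population share) / (sample share); underrepresented groups get
# weights > 1, overrepresented groups get weights < 1
weights = {group: population_share[group] / (sample_counts[group] / n)
           for group in sample_counts}

for group, w in sorted(weights.items()):
    print(f"{group}: weight {w:.2f}")
# 18-24: weight 0.33, 25-34: weight 1.00, 35+: weight 5.00
```

Multiplying each user's responses by their stratum weight lets population-level estimates be computed from a demographically skewed platform sample.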
Social scientists have honed their techniques and standards to deal with this sort of challenge before. “The infamous ‘Dewey Defeats Truman’ headline of 1948 stemmed from telephone surveys that under-sampled Truman supporters in the general population,” Ruths notes. “Rather than permanently discrediting the practice of polling, that glaring error led to today’s more sophisticated techniques, higher standards, and more accurate polls. Now, we’re poised at a similar technological inflection point. By tackling the issues we face, we’ll be able to realize the tremendous potential for good promised by social media-based research.”