Are we misinformed about disinformation?


In June, the journal Nature published a perspective suggesting that the harms of online misinformation have been misunderstood. The paper’s authors, representing four universities and Microsoft, conducted a review of the behavioral science literature and identified what they characterized as three common misconceptions: that the average person’s exposure to false and inflammatory content is high, that algorithms drive this exposure, and that many broader problems in society are predominantly caused by social media.

“People who go on YouTube to watch baking videos and end up on Nazi websites — this is very, very rare,” said David Rothschild, an economist at Microsoft Research who is also a researcher at the University of Pennsylvania’s Penn Media Accountability Project. That’s not to say that edge cases don’t matter, he and his colleagues wrote, but treating them as typical can contribute to misunderstandings—and divert attention from more pressing issues.

Rothschild spoke to Undark about the paper in a video call. Our conversation has been edited for length and clarity.

Undark: What motivated you and your co-authors to write this perspective?

David Rothschild: The five co-authors of this paper had all been doing very different research in this space for years, trying to understand what is happening on social media: what is good, what is bad, and especially how it differs from the stories we hear from the mainstream media and from other researchers.

Specifically, we were dealing with questions about what a typical consumer’s experience is – a typical person versus a more extreme example. A lot of what we saw, and a lot of what was referenced in research, really described a pretty extreme scenario.

The other part of it is a lot of emphasis around algorithms, a lot of concern about algorithms. What we’re seeing is that a lot of malicious content isn’t coming from an algorithm pushing it on people. Actually, it’s just the opposite. The algorithm kind of pulls you towards the middle.

And then there are these questions of causation and correlation. A lot of research, and especially the mainstream media, conflate the proximate cause of something with the underlying cause of it.

There are many who say, “Oh, these yellow vest riots are happening in France. They were organized on Facebook.” Well, there have been riots in France for a couple of hundred years; people find ways to organize even without social media.

The proximate cause – the proximate way people organized themselves on January 6 – was very much online. But then comes the question: Could these things have happened in an offline world? Those are tricky questions.

By writing a perspective here in Nature, we can reach stakeholders outside of academia and address the broader discussion, because there are real consequences: research is directed, funds are awarded, and platforms are pressured to solve the problems that people are discussing.

UD: Can you talk about the example of the 2016 election: what you found, and also the role the media may have played in presenting information that wasn’t entirely accurate?

DR: The bottom line is that what the Russians did in 2016 is really interesting and newsworthy. They invested quite heavily in creating dormant Facebook organizations that posted viral content and then slipped in a bunch of fake news towards the end. It’s certainly meaningful, and I certainly understand why people were fascinated by it. But ultimately, what we wanted to ask is, “How much impact could it plausibly have had?”

“A lot of research, and especially the mainstream media, conflate the proximate cause of something with the underlying cause of it.”

The effect is really hard (to measure), but at least we can put people’s news diets into perspective and show that views of direct Russian disinformation were only a microscopic part of people’s news consumption on Facebook – let alone their overall consumption of Facebook, let alone their consumption of news in general, of which Facebook is only a small part. Especially in 2016, the vast majority of people, even younger people, still consumed far more news on television than online, let alone on social media.

While we agree that fake news is probably not good, there is plenty of research showing that repeated interaction with content is what actually drives people’s underlying causal understanding of the world – their stories, however you want to describe it. Being hit with fake news occasionally, and at very low rates for the typical consumer, is simply not the driving force.

UD: My impression from reading your Nature paper is that you found that journalists are spreading incorrect information about the effects of incorrect information. Is that right? And if so, why do you think it happens?

DR: In the end, it’s a good story. And nuance is difficult, very difficult, and negativity is popular.

UD: So what is a good story, specifically?

DR: That social media is harming your children. That social media is the problem.

There is a general desire to cover things in a more negative light. And there is certainly a long history of people panicking over new technology and pinning any social problem on it, whether it was the internet, or television, or radio, or music, or books. You can just go back in time and see all of these same concerns.

Ultimately, there will be people who benefit from social media, there will be people who are harmed by social media, and there will be many people who evolve with it the way society continues to evolve with new technology. That’s just not as interesting a story as social media causing these problems, with nothing to counterbalance it.

“Social media is the problem, and it’s really the algorithms” provides a very simple and straightforward solution, which is to fix the algorithms. And it avoids the harder question – the one we generally don’t want to ask – about human nature.

A lot of the research that we cite here – the research that I think makes people uncomfortable – shows that some part of the population demands horrible things. They demand things that are racist, degrading, violent. That demand can be satisfied on various social media platforms, just as it was previously satisfied in other forms of media, whether it was books, or movies, or radio – whatever it was that people listened to or got their information from before.

At the end of the day, the different channels we have available definitely change the ease and the manner in which these things are distributed. But the existence of these things is a question of human nature, far beyond my ability as a scientist to resolve – far beyond the capacity of most people, of anyone. I think that makes it tricky, and it also makes people uncomfortable. And I think that’s why a lot of journalists like to focus on “social media is bad, algorithms are the problem.”

UD: On the same day that Nature published your piece, the journal also published a comment titled “Disinformation poses a bigger threat to democracy than you might think.” The authors suggest that “concern about the anticipated blizzard of election-related disinformation is warranted, given the ability of false information to increase polarization and undermine trust in electoral processes.” What should the average person make of these seemingly divergent views?

DR: We certainly don’t want to give the impression that we condone any piece of misinformation or harmful content, or that we downplay the impact it has, especially on the people it affects. What we’re saying is that it’s concentrated away from the typical consumer, in extreme pockets, and reaching it takes a different approach and a different allocation of resources than the traditional research, and the traditional questions you see, which are about targeting the typical consumer, about targeting this mass impact.

I read it, and I don’t necessarily think it’s wrong so much as I don’t see who they’re yelling at, basically, in that paragraph. I don’t think there’s a huge movement to trivialize the problem so much as to say, “Hey, we should actually fight it where it is, fight it where the problems are.” I think in a way we’re talking past each other.

UD: You are employed by Microsoft. How would you reassure potentially skeptical readers that your study is not an attempt to play down the negative impact of products that are profitable for the tech industry?

DR: This article has four academic co-authors and went through an incredibly rigorous process. You may not have noticed on the front page: we submitted this paper on October 13, 2021, and it was finally accepted on April 11, 2024. I’ve had some crazy review processes in my time. This was intense.

We came up with the ideas based on our own academic research. We supplemented them with the latest research, and we continue to supplement them with research as it comes in, especially research that contradicted our original view.

The bottom line is that Microsoft Research is a truly unique place. For those unfamiliar with it, it was founded on the Bell Labs model: there is no review process for publications that come out of Microsoft Research, because the belief is that the integrity of the work rests on the fact that nothing is censored on its way out. The idea is to use this position to engage in discussion and understanding of the effects of things that are close to the company, and of things that have nothing to do with it.

In this case, I think the topic is pretty far removed. It really is a great place to be. A lot of the work is co-authored with academic collaborators, and it’s always important to have very clear guidelines in the process and to ensure the academic integrity of the work.

UD: I forgot to ask you about your team’s methods.

DR: It is obviously different from a traditional research paper. This definitely started from conversations among the co-authors about joint work and separate work we had done that we felt still wasn’t cutting through in the right places. It really started with writing down some theories we had about the differences between our academic work, the general academic work, and what we saw in the public discussion. And then an extremely thorough literature review.

As you’ll see, we’re somewhere over 150 citations – 154 citations. And through this incredibly long review process at Nature, we went line by line to make sure there was nothing that wasn’t supported by the literature: either, where appropriate, the academic literature, or, where appropriate, what we could cite from public sources.

The idea was to really create, hopefully, a comprehensive piece that allowed people to really see what we think is a really important discussion — and that’s why I’m so excited to be talking to you today — about where the real damage is and where the pressure should be.

None of us wants to stake out a position and stick to it despite new evidence. There are varying models of social media. What we have now with TikTok and Reels and YouTube Shorts is a completely different experience from the main social media consumption of a few years ago, with longer videos, or of a few years before that, with news feeds. These will continue to be things you want to monitor and understand.

This article was originally published on Undark. Read the original article.
